19 results for Superiority

in Aston University Research Archive


Relevance: 10.00%

Publisher:

Abstract:

Outcomes measures, that is, the measurement of the effectiveness of interventions and services, have been propelled onto the health service agenda since the introduction of the internal market in the 1990s. They arose as a result of the escalating cost of inpatient care, the need to identify which interventions work and in what situations, and the desire of service users for useful information, enabled by the consumerist agenda introduced by the Working for Patients white paper. The research reported in this thesis is an assessment of the readiness of the forensic mental health service to measure the outcomes of interventions. The research examines the type, prevalence and scope of use of outcomes measures, and further seeks a consensus among key stakeholders on the priority areas for future development. It discusses the theoretical basis for defining health and advances the argument that the present focus on measuring the effectiveness of care is misdirected without the input of users, particularly patients, in their care, drawing together the views of the many stakeholders who have an interest in the provision of care in the service. The research further draws on the theory of structuration to demonstrate the degree to which a duality of action, which is necessary for the development and use of outcomes measures, is in place within the service. Consequently, it highlights some of the hurdles that need to be surmounted before effective measurement of health gain can be developed in the field of study. It concludes by advancing the view that outcomes research can enable practitioners to better understand the relationship between the illness of the patient and the efficacy of treatment. This understanding, it is argued, would contribute to improving dialogue between the health care practitioner and the patient, and to providing the information necessary for moving away from the field's many untested assumptions about the superiority of one treatment approach over another.

Relevance: 10.00%

Publisher:

Abstract:

The research described in this study replicates and extends the Brady et al. study [Brady, M. K., Knight, G. A., Cronin Jr., J. J., Hult, G. T. M. and Keillor, B. D. (2005), 'Removing the Contextual Lens: A Multinational, Multi-setting Comparison of Service Evaluation Models', Journal of Retailing, 81(3), pp. 215-230], following its suggestion that future research on service evaluations should focus on emerging service economies such as China. The intent of the research was to examine the suitability of the models suggested by Brady and colleagues in the Chinese market. The replication was partially successful in reproducing their finding regarding the superiority of the comprehensive service evaluation model. We also examined whether the service evaluation model is gender invariant; our findings indicate that there are significant differences between genders. These findings are discussed in relation to the limitations of the study.

Relevance: 10.00%

Publisher:

Abstract:

In this thesis various mathematical methods of studying the transient and dynamic stability of practical power systems are presented. Certain long established methods are reviewed and refinements of some proposed. New methods are presented which remove some of the difficulties encountered in applying the powerful stability theories based on the concepts of Liapunov. Chapter 1 is concerned with numerical solution of the transient stability problem. Following a review and comparison of synchronous machine models, the superiority of a particular model from the point of view of combined computing time and accuracy is demonstrated. A digital computer program incorporating all the synchronous machine models discussed, and an induction machine model, is described and results of a practical multi-machine transient stability study are presented. Chapter 2 reviews certain concepts and theorems due to Liapunov. In Chapter 3 transient stability regions of single, two and multi-machine systems are investigated through the use of energy-type Liapunov functions. The treatment removes several mathematical difficulties encountered in earlier applications of the method. In Chapter 4 a simple criterion for the steady-state stability of a multi-machine system is developed and compared with established criteria and a state space approach. In Chapters 5, 6 and 7 dynamic stability and small-signal dynamic response are studied through a state space representation of the system. In Chapter 5 the state space equations are derived for single machine systems. An example is provided in which the dynamic stability limit curves are plotted for various synchronous machine representations. In Chapter 6 the state space approach is extended to multi-machine systems. To draw conclusions concerning dynamic stability or dynamic response the system eigenvalues must be properly interpreted, and a discussion concerning correct interpretation is included. Chapter 7 presents a discussion of the optimisation of power system small-signal performance through the use of Liapunov functions.
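
To make the energy-function approach concrete, a standard single-machine example (textbook material, not necessarily the exact formulation used in the thesis) pairs the classical swing equation with an energy-type Liapunov function:

\[
M\,\ddot{\delta} = P_m - P_{\max}\sin\delta,
\qquad
V(\delta,\omega) = \tfrac{1}{2}\,M\,\omega^{2} - P_m\,(\delta - \delta_s) - P_{\max}\,(\cos\delta - \cos\delta_s),
\]

where \delta_s is the stable equilibrium angle and \omega = \dot{\delta}. The transient stability region is then estimated as the set of states for which V remains below its value at the controlling unstable equilibrium point.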

Relevance: 10.00%

Publisher:

Abstract:

The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. Over recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly decreasing behaviour of the curves and investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix defining the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov Chain Monte Carlo method and the evidence framework; the neural networks were trained on the task of labelling segmented outdoor images.
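
As an illustration of the idea of a general (non-diagonal) distance matrix, the sketch below implements a squared-exponential covariance whose metric is M = W^T W, so that a low-rank W simultaneously defines a linear transformation of the inputs and an effective input dimension. The function and variable names are illustrative only and do not come from the thesis.

    import numpy as np

    def general_metric_rbf(X1, X2, W, sigma_f=1.0):
        # Squared-exponential covariance with a general distance matrix M = W^T W:
        #   k(x, x') = sigma_f^2 * exp(-0.5 * (x - x')^T M (x - x')).
        # A diagonal M recovers the usual per-variable lengthscales; a full M also
        # learns a linear map from the manifest variables to a hidden-feature space.
        Z1, Z2 = X1 @ W.T, X2 @ W.T                               # project inputs through W
        d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)     # pairwise squared distances
        return sigma_f ** 2 * np.exp(-0.5 * d2)

    # Toy usage: 5-D inputs whose relevant structure lies in a 2-D linear subspace.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    W = rng.normal(size=(2, 5))                                   # rank 2, so effective dimension 2
    K = general_metric_rbf(X, X, W)                               # (50, 50) covariance matrix
    print(K.shape)

In practice W (or the full matrix M) would be optimised together with the other hyperparameters, for example by maximising the marginal likelihood.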

Relevance: 10.00%

Publisher:

Abstract:

The purpose of this study is to develop econometric models to better understand the economic factors affecting inbound tourist flows from each of six origin countries that contribute to Hong Kong's international tourism demand. To this end, we test alternative cointegration and error correction approaches to examine the economic determinants of tourist flows to Hong Kong and to produce accurate econometric forecasts of inbound tourism demand. Our empirical findings show that permanent income is the most significant determinant of tourism demand in all models. The variables of own price, weighted substitute prices, trade volume, the share price index (as an indicator of changes in wealth in origin countries), and a dummy variable representing the Beijing incident (1989) are also found to be important determinants for some origin countries. The average long-run income and own-price elasticities were measured at 2.66 and -1.02, respectively. It was hypothesised that permanent income is a better explanatory variable of long-haul tourism demand than current income. A novel approach (a grid search process) was used to empirically derive the weights to be attached to the lagged income variable for estimating permanent income. The results indicate that permanent income, estimated with empirically determined, relatively small weighting factors, produced better results than the current income variable in explaining long-haul tourism demand. This finding suggests that the use of current income in previous empirical tourism demand studies may have produced inaccurate results. The share price index, as a measure of wealth, was also found to be significant in two models. Studies of tourism demand rarely include wealth as an explanatory variable when forecasting long-haul tourism demand; however, finding a satisfactory proxy for wealth common to different countries is problematic. This study indicates that error correction models (ECMs) based on the Engle-Granger (1987) approach produce more accurate forecasts than ECMs based on the Pesaran and Shin (1998) and Johansen (1988, 1991, 1995) approaches for all of the long-haul markets and Japan. Overall, the ECMs produce better forecasts than the OLS, ARIMA and naïve models, indicating the superiority of a cointegration approach for tourism demand forecasting. The results show that permanent income is the most important explanatory variable for tourism demand from all countries, but there are substantial variations between countries, with the long-run elasticity ranging between 1.1 for the U.S. and 5.3 for the U.K. Price is the next most important variable, with long-run elasticities ranging between -0.8 for Japan and -1.3 for Germany and short-run elasticities ranging between -0.14 for Germany and -0.7 for Taiwan. The fastest growing market is Mainland China. The findings have implications for policies and strategies on investment, marketing promotion and pricing.
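
The two ingredients described above can be sketched in generic form (illustrative notation, not the exact specification estimated in the study). Permanent income is constructed as a weighted sum of current and lagged income, with the weights chosen by a grid search,

\[
Y^{P}_{t} = \sum_{i=0}^{k} w_i\, Y_{t-i}, \qquad \sum_{i=0}^{k} w_i = 1,\; w_i \ge 0,
\]

and a typical Engle-Granger two-step error correction model takes the form

\[
\ln q_t = \beta_0 + \beta_1 \ln Y^{P}_t + \beta_2 \ln p_t + \dots + u_t,
\qquad
\Delta \ln q_t = \alpha_0 + \textstyle\sum_j \alpha_j\, \Delta x_{j,t} + \gamma\, \hat{u}_{t-1} + \varepsilon_t,
\]

where q_t denotes tourist arrivals, the long-run coefficients give the reported elasticities, and a negative \gamma measures the speed of adjustment back towards the cointegrating relationship.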

Relevance: 10.00%

Publisher:

Abstract:

This work is undertaken in an attempt to understand the processes at work at the cutting edge of the twist drill. Extensive drill life testing performed by the University has reinforced a survey of previously published information. This work demonstrated that there are two specific aspects of drilling which have not previously been explained comprehensively. The first concerns the interrelating of process data between differing drilling situations: there is no method currently available which allows the cutting geometry of drilling to be defined numerically, so that such comparisons, where made, are purely subjective. Section one examines this problem by taking as an example a 4.5mm drill suitable for use with aluminium. This drill is examined using a prototype solid modelling program to explore how the required numerical information may be generated. The second aspect is the analysis of drill stiffness. What aspects of drill stiffness account for the very great difference in performance between short, medium and long flute length drills? These differences exist between drills of identical point geometry, and the practical superiority of short drills has been known to shop-floor drilling operatives since drilling was first introduced. This problem has been dismissed repeatedly as over-complicated, but section two provides a first approximation and shows that, at least for smaller drills of 4.5mm, the effects are highly significant. Once the cutting action of the twist drill is defined geometrically, there is a huge body of machinability data that becomes applicable to the drilling process. Work remains to interpret the very high inclination angles of the drill cutting process in terms of cutting forces and tool wear, but aspects of drill design may already be looked at in new ways, with the prospect of a more analytical approach rather than the present mix of experience and trial and error. Other problems are specific to the twist drill, such as the behaviour of the chips in the flute. It is now possible to predict the initial direction of chip flow leaving the drill cutting edge. For the future, the parameters of further chip behaviour may also be explored within this geometric model.

Relevance: 10.00%

Publisher:

Abstract:

This thesis presents a thorough and principled investigation into the application of artificial neural networks to the biological monitoring of freshwater. It contains original ideas on the classification and interpretation of benthic macroinvertebrates, and aims to demonstrate the superiority of these methods over the biotic systems currently used in the UK to report river water quality. The conceptual basis of a new biological classification system is described, and a full review and analysis of a number of river data sets is presented. The biological classification is compared to the common biotic systems using data from the Upper Trent catchment. These data contained 292 expertly classified invertebrate samples identified to mixed taxonomic levels. The neural network experimental work concentrates on the classification of the invertebrate samples into biological class, where only a subset of the sample is used to form the classification. Other experimentation is conducted into the identification of novel input samples, the classification of samples from different biotopes, and the use of prior information in the neural network models. The biological classification is shown to provide an intuitive interpretation of a graphical representation of the Upper Trent data, generated without reference to the class labels. The selection of key indicator taxa is considered using three different approaches: one novel, one from information theory and one from classical statistical methods. Good indicators of quality class based on these analyses are found to be in good agreement with those chosen by a domain expert. The change in information associated with different levels of identification and enumeration of taxa is quantified. The feasibility of using neural network classifiers and predictors to develop numeric criteria for the biological assessment of sediment contamination in the Great Lakes is also investigated.
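
As a minimal sketch of the information-theoretic route to indicator selection, candidate taxa can be ranked by the mutual information between their recorded presence and the biological quality class; the example below is illustrative only and is not taken from the thesis.

    import numpy as np
    from collections import Counter

    def mutual_information(x, y):
        # Empirical mutual information I(X; Y) in bits for two discrete sequences.
        n = len(x)
        px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
        return sum((nxy / n) * np.log2((nxy / n) / ((px[xi] / n) * (py[yi] / n)))
                   for (xi, yi), nxy in pxy.items())

    # Toy usage with hypothetical data: presence/absence of 10 taxa in 200 samples.
    rng = np.random.default_rng(1)
    quality_class = rng.integers(0, 3, size=200)             # hypothetical class labels
    taxa = (rng.random((200, 10)) < 0.3).astype(int)         # hypothetical taxon records
    taxa[:, 0] = (quality_class > 0).astype(int)             # make taxon 0 informative
    scores = [mutual_information(taxa[:, j], quality_class) for j in range(10)]
    print(np.argsort(scores)[::-1])                          # taxon 0 should rank first

Taxa with high scores carry the most information about quality class and are natural candidates for indicator status, which can then be checked against expert choices.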

Relevance: 10.00%

Publisher:

Abstract:

This thesis addresses the kineto-elastodynamic analysis of a four-bar mechanism running at high speed, where all links are assumed to be flexible. First, the mechanism, at static configurations, is considered as a structure. Two methods are used to model the system, namely the finite element method (FEM) and the dynamic stiffness method. The natural frequencies and mode shapes at different positions are calculated with both methods and compared. The FEM is used to model the mechanism running at high speed. The governing equations of motion are derived using Hamilton's principle. The equations obtained are a set of stiff ordinary differential equations with periodic coefficients. A model is developed whereby the FEM and the dynamic stiffness method are used conjointly to provide high-precision results with only one element per link. The principal concern of the mechanism designer is the behaviour of the mechanism at steady state. Few algorithms have been developed to deliver the steady-state solution without resorting to costly time-marching simulation. In this study two algorithms are developed to overcome the limitations of the existing algorithms, and the superiority of the new algorithms is demonstrated. The notion of critical speeds is clarified and a distinction is drawn between "critical speeds", where stresses are at a local maximum, and "unstable bands", where the mechanism deflections grow boundlessly. Floquet theory is used to assess the stability of the system, and a simple method to locate the critical speeds is derived. It is shown that the critical speeds of the mechanism coincide with the local maxima of the eigenvalues of the transition matrix with respect to the rotational speed of the mechanism.
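
The stability test referred to above can be stated compactly in standard Floquet terms (a generic statement of the theory, not the thesis's specific algorithm). For the linearised periodic system

\[
\dot{\mathbf{x}}(t) = \mathbf{A}(t)\,\mathbf{x}(t), \qquad \mathbf{A}(t + T) = \mathbf{A}(t),
\]

the transition (monodromy) matrix \mathbf{\Phi}(T) is obtained by integrating over one period from \mathbf{\Phi}(0) = \mathbf{I}; the response decays when every Floquet multiplier, i.e. every eigenvalue \mu_i of \mathbf{\Phi}(T), satisfies |\mu_i| < 1, and the unstable bands are the speed ranges in which some |\mu_i| exceeds 1.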

Relevance: 10.00%

Publisher:

Abstract:

Applying direct error counting, we compare the accuracy and evaluate the validity of different available numerical approaches to the estimation of the bit-error rate (BER) in 40-Gb/s return-to-zero differential phase-shift-keying transmission. As a particular example, we consider a system with in-line semiconductor optical amplifiers. We demonstrate that none of the existing models has an absolute superiority over the others. We also reveal the impact of the duty cycle on the accuracy of the BER estimates through the differently introduced Q-factors.
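
For reference, the simplest of the compared approaches rests on the Gaussian Q-factor approximation (the paper itself examines several differently defined Q-factors):

\[
\mathrm{BER} \approx \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{Q}{\sqrt{2}}\right),
\qquad
Q = \frac{\mu_1 - \mu_0}{\sigma_1 + \sigma_0},
\]

where \mu_{0,1} and \sigma_{0,1} are the means and standard deviations of the received decision variable in the two symbol states; departures from Gaussian statistics are one reason why such estimates can disagree with direct error counting.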

Relevance: 10.00%

Publisher:

Abstract:

Erbium-doped fibre amplifiers (EDFAs) are a key technology for the design of all-optical communication systems and networks. The superiority of EDFAs lies in their negligible intermodulation distortion across high-speed multichannel signals, low intrinsic losses, slow gain dynamics, and gain over a wide range of optical wavelengths. Owing to the long lifetime of the excited state, EDFAs are not affected by cross-gain saturation at the timescale of the signal modulation; the time characteristics of the gain saturation and recovery effects lie between a few hundred microseconds and 10 milliseconds. However, in wavelength division multiplexed (WDM) optical networks with EDFAs, the number of channels traversing an EDFA can change due to a faulty link in the network or system reconfiguration. It has been found that, due to the variation in channel number along the EDFA chain, the output powers of surviving channels can change in a very short time. Thus, the power transient is one of the problems deteriorating system performance. In this thesis, the transient phenomenon in wavelength-routed WDM optical networks with EDFA chains was investigated. The task was performed using different input signal powers for circuit-switched networks. A simulator for the EDFA gain dynamic model was developed to compute the magnitude and speed of the power transients in non-self-saturated EDFAs, both single and chained. The dynamic model of the self-saturated EDFA chain and its simulator were also developed to compute the magnitude and speed of the power transients and the optical signal-to-noise ratio (OSNR). We found that the OSNR transient magnitude and speed are a function of both the output power transient and the number of EDFAs in the chain. The OSNR value predicts the level of the quality of service in the related network. It was found that the power transients for self-saturated and non-self-saturated EDFAs are close in magnitude in the case of gain-saturated EDFA networks. Moreover, cross-gain saturation also degrades the performance of packet-switching networks due to varying traffic characteristics. The magnitude and the speed of the output power transients increase along the EDFA chain. An investigation was carried out on asynchronous transfer mode (ATM) or WDM Internet protocol (WDM-IP) traffic networks using different traffic patterns based on the Pareto and Poisson distributions. The simulator was used to examine the magnitude and speed of the power transients for Pareto- and Poisson-distributed traffic at different bit rates, with a specific focus on 2.5 Gb/s. It was found from numerical and statistical analysis that the power swing increases if the burst-ON/burst-OFF interval in the packet bursts is long. This is because the gain dynamics are fast during a strong signal pulse or a pulse of long duration, owing to the stimulated-emission avalanche depletion of the excited ions. Thus, an increase in output power level could lead to error bursts, which affect system performance.
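
The physical origin of the transients can be summarised by a generic two-level rate equation for the excited-ion density (a standard textbook form, not the specific dynamic model implemented in the thesis):

\[
\frac{dN_2}{dt} = R_p - \frac{N_2}{\tau} - \sum_{j}\bigl(\sigma_{e,j}\, N_2 - \sigma_{a,j}\, N_1\bigr)\,\frac{P_j}{h\nu_j A},
\]

where R_p is the pump rate, \tau (of the order of 10 ms) is the upper-state lifetime, and the sum runs over the channels present. When channels are dropped, the stimulated-emission drain falls, so N_2 and hence the gain seen by the surviving channels rise on the sub-millisecond to millisecond timescale quoted above, and the reverse occurs when channels are added; this is the mechanism behind the power transients studied here.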

Relevance: 10.00%

Publisher:

Abstract:

The research described in this study replicates and extends the Brady et al. study [Brady, M. K., Knight, G. A., Cronin Jr., J. J., Hult, G. T. M. and Keillor, B. D. (2005), 'Removing the Contextual Lens: A Multinational, Multi-setting Comparison of Service Evaluation Models', Journal of Retailing, 81(3), pp. 215-230], following its suggestion that future research on service evaluations should focus on emerging service economies such as China. The intent of the research was to examine the suitability of the models suggested by Brady and colleagues in the Chinese market. The replication was partially successful in reproducing their finding regarding the superiority of the comprehensive service evaluation model. We also examined whether the service evaluation model is gender invariant; our findings indicate that there are significant differences between genders. These findings are discussed in relation to the limitations of the study.

Relevance: 10.00%

Publisher:

Abstract:

Applying direct error counting, we compare the accuracy and evaluate the validity of different available numerical approaches to the estimation of the bit-error rate (BER) in 40-Gb/s return-to-zero differential phase-shift-keying transmission. As a particular example, we consider a system with in-line semiconductor optical amplifiers. We demonstrate that none of the existing models has an absolute superiority over the others. We also reveal the impact of the duty cycle on the accuracy of the BER estimates through the differently introduced Q-factors. © 2007 IEEE.

Relevance: 10.00%

Publisher:

Abstract:

The Retinal Vessel Analyser (RVA) is a commercially available ophthalmoscopic instrument capable of acquiring vessel diameter fluctuations in real time and at high temporal resolution. Visual stimulation by means of flickering light is a unique tool for exploring neurovascular coupling in the human retina. Vessel reactivity, as mediated by local vascular endothelial vasodilators and vasoconstrictors, can be assessed non-invasively, in vivo. In brief, the work in this thesis
• deals with interobserver and intraobserver reproducibility of the flicker responses in healthy volunteers
• explains the superiority of individually analysed reactivity parameters over vendor-generated output
• links static retinal measures with dynamic ones
• highlights practical limitations in the use of the RVA that may undermine its clinical usefulness
• provides recommendations for standardising measurements in terms of vessel location and vessel segment length and
• presents three case reports of essential hypertensives in a -year follow-up.
Strict standardisation of measurement procedures is a necessity when utilising the RVA system. Agreement between research groups on implemented protocols needs to be reached before the RVA can be considered a clinically useful tool for detecting or predicting microvascular dysfunction.

Relevance: 10.00%

Publisher:

Abstract:

This paper explains some drawbacks of previous approaches to detecting influential observations in deterministic nonparametric data envelopment analysis models, as developed by Yang et al. (Annals of Operations Research 173:89-103, 2010). For example, the efficiency scores and relative entropies obtained in that model are uninformative for outlier detection, and the empirical distribution of all estimated relative entropies is not a Monte Carlo approximation. In this paper we develop a new method to detect whether a specific DMU is truly influential, and a statistical test is applied to determine the significance level. An application to measuring the efficiency of hospitals is used to show the superiority of this method, which leads to significant advancements in outlier detection. © 2014 Springer Science+Business Media New York.
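
For reference, the relative entropy in question is the Kullback-Leibler divergence between two discrete distributions p and q,

\[
D(p\,\|\,q) = \sum_{i} p_i \log \frac{p_i}{q_i},
\]

and the argument above concerns whether entropies constructed in this way from DEA efficiency scores, as in Yang et al., actually discriminate influential DMUs; the method proposed here instead attaches a formal significance test to that decision.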