22 results for pseudo-utility
in Aston University Research Archive
Abstract:
Regulation is subject to information asymmetries that can lead to allocative and productive inefficiencies. One solution, suggested by Shleifer in 1985 and now adopted by many regulatory bodies round the world, is 'benchmarking', which is sometimes called 'yardstick competition'. In this paper we consider Shleifer's original approach to benchmarking and contrast this with the actual use of benchmarking by UK regulatory bodies in telecommunications, water and the energy sector since the privatizations of the 1980s and early 1990s. We find that benchmarking plays only one part and sometimes a small part in the setting of regulatory price caps in the UK. We also find that in practice benchmarking has been subject to a number of difficulties, which mean that it is never likely to be more than one tool in the regulator's armoury. The UK's experience provides lessons for regulation internationally. © 2006 Elsevier Ltd. All rights reserved.
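In its simplest textbook form (the notation below is ours, not the paper's), yardstick competition decouples each firm's allowed price from its own reported costs: with n comparable regulated firms reporting unit costs c_j, firm i's price is capped at the average cost of its peers,

    p_i = \frac{1}{n-1} \sum_{j \neq i} c_j,

so that a firm which cuts its costs keeps the savings as profit rather than seeing its own price cap fall.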
Abstract:
This paper reviews and evaluates the literature concerning the privatisation and regulation of the utility industries in the UK. The economic theories behind, and political reasons for, the programme are considered to give the reader an appreciation of the environment from which these organisations were born and the implications for their continued existence. Once this has been established, the paper then considers the role that accounting has played and will continue to play in these industries. This includes consideration of the technical questions which these new organisations are asking and also the role that accounting has in the organisational structure and culture. It concludes that these recently privatised industries provide a unique and rich source for further accounting research.
Abstract:
This thesis has two aims. First, it sets out to develop an alternative methodology for the investigation of risk homeostasis theory (RHT). It is argued that the current methodologies of the pseudo-experimental design and post hoc analysis of road-traffic accident data both have their limitations, and that the newer 'game' type simulation exercises are also, but for different reasons, incapable of testing RHT predictions. The alternative methodology described here is based on the simulation of physical risk with intrinsic reward rather than a 'points pay-off'. The second aim of the thesis is to examine a number of predictions made by RHT through the use of this alternative methodology. Since the pseudo-experimental design and post hoc analysis of road-traffic data are both ill-suited to the investigation of that part of RHT which deals with the role of utility in determining risk-taking behaviour in response to a change in environmental risk, and since the concept of utility is critical to RHT, the methodology reported here is applied to the specific investigation of utility. Attention too is given to the question of which behavioural pathways carry the homeostasis effect, and whether those pathways are 'local' to the nature of the change in environmental risk. It is suggested that investigating RHT through this new methodology holds a number of advantages and should be developed further in an attempt to answer the RHT question. It is suggested too that the methodology allows RHT to be seen in a psychological context, rather than the statistical context that has so far characterised its investigation. The experimental findings reported here are in support of hypotheses derived from RHT and would therefore seem to argue for the importance of the individual and collective target level of risk, as opposed to the level of environmental risk, as the major determinant of accident loss.
Abstract:
The two electrophysiological tests currently favoured in the clinical measurement of hearing threshold are the brainstem auditory evoked potential (BAEP) and the slow vertex response (SVR). However, both tests possess disadvantages. The BAEP is the test of choice in younger patients as it is stable at all levels of arousal, but little information has been obtained to date at a range of frequencies. The SVR is frequency specific but is unreliable in certain adult subjects and is unstable during sleep or in young children. These deficiencies have prompted research into a third group of potentials, the middle latency response (MLR) and the 40 Hz response. This research has compared the SVR and 40 Hz response in waking adults and reports that the 40 Hz test can provide a viable alternative to the SVR provided that a high degree of subject relaxation is ensured. A second study examined the morphology of the MLR and 40 Hz response during sleep. This work suggested that these potentials are markedly different during sleep and that methodological factors have been responsible for masking these changes in previous studies. The clinical possibilities of tone-pip BAEPs were then examined, as these components proved to be the only stable responses present in sleep. It was found that threshold estimates to 500 Hz, 1000 Hz and 4000 Hz stimuli could be made to within 15 dB SL in most cases. A final study looked more closely at methods of obtaining frequency-specific information in sleeping subjects. Threshold estimates were made using established BAEP parameters and compared to a 40 Hz procedure which recorded a series of BAEPs over a 100 ms time sweep. Results indicated that the 40 Hz procedure was superior to existing techniques in estimating threshold to low-frequency stimuli. This research has confirmed a role for the MLR and 40 Hz response as alternative measures of hearing capability in waking subjects and proposes that the 40 Hz technique is useful in measuring frequency-specific thresholds, although the responses recorded derive primarily from the brainstem.
Abstract:
In this work we propose the hypothesis that replacing the current system of representing the chemical entities known as amino acids using Latin letters with one of several possible alternative symbolic representations will bring significant benefits to the human construction, modification, and analysis of multiple protein sequence alignments. We propose ways in which this might be done without prescribing the choice of actual scripts used. Specifically we propose and explore three ways to encode amino acid texts using novel symbolic alphabets free from precedents. Primary orthographic encoding is the direct substitution of a new alphabet for the standard, Latin-based amino acid code. Secondary encoding imposes static residue groupings onto the orthography of the alphabet by manipulating the shape and/or orientation of amino acid symbols. Tertiary encoding renders each residue as a composite symbol; each such symbol thus representing several alternative amino acid groupings simultaneously. We also propose that the use of a new group-focussed alphabet will free the colouring of amino acid residues often used as a tool to facilitate the representation or construction of multiple alignments for other purposes, possibly to indicate dynamic properties of an alignment such as position-wise residue conservation.
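A minimal sketch of what primary orthographic encoding could look like in practice, assuming Python and an arbitrary Greek-letter script chosen purely for illustration (the authors deliberately do not prescribe a choice of script):

    # Primary orthographic encoding: a direct one-to-one substitution of a
    # precedent-free symbol set for the Latin amino acid code. The Greek
    # letters below are purely illustrative; the choice of script is open.
    PRIMARY = dict(zip("ACDEFGHIKLMNPQRSTVWY",
                       "αβγδεζηθικλμνξοπρστυ"))

    def encode(sequence, table=PRIMARY):
        # Re-render a Latin-letter amino acid string in the alternative alphabet.
        return "".join(table.get(residue, residue) for residue in sequence)

    print(encode("MKVLATTGACDE"))

Secondary and tertiary encodings would replace this flat table with symbols whose shape or composition also carries residue-group information.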
Abstract:
The research presented in this paper is part of an ongoing investigation into how best to support meaningful lab-based usability evaluations of mobile technologies. In particular, we report on a comparative study of (a) a standard paper prototype of a mobile application used to perform an early-phase seated (static) usability evaluation, and (b) a pseudo-paper prototype created from the paper prototype used to perform an early-phase, contextually-relevant, mobile usability evaluation. We draw some initial conclusions regarding whether it is worth the added effort of conducting a usability evaluation of a pseudo-paper prototype in a contextually-relevant setting during early-phase user interface development.
Abstract:
Commentary on focal article by Tannenbaum et al., Teams are changing: Are research and practice evolving fast enough?
Abstract:
Distributed network utility maximization (NUM) is receiving increasing interest for cross-layer optimization problems in multihop wireless networks. Traditional distributed NUM algorithms rely heavily on feedback information exchanged between different network elements, such as traffic sources and routers. Because of the distinct features of multihop wireless networks, such as time-varying channels and dynamic network topology, the feedback information is usually inaccurate, which represents a major obstacle to applying distributed NUM to wireless networks. The questions to be answered include whether a distributed NUM algorithm can converge with inaccurate feedback and how to design an effective distributed NUM algorithm for wireless networks. In this paper, we first use the infinitesimal perturbation analysis technique to provide an unbiased gradient estimate of the aggregate rate of traffic sources at the routers based on locally available information. On that basis, we propose a stochastic approximation algorithm to solve the distributed NUM problem with inaccurate feedback. We then prove that, under certain conditions, the proposed algorithm converges to the optimum solution of distributed NUM with perfect feedback. The proposed algorithm is applied to the joint rate and media access control problem for wireless networks. Numerical results demonstrate the convergence of the proposed algorithm. © 2013 John Wiley & Sons, Ltd.
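The paper's algorithm is not reproduced here, but the flavour of a stochastic-approximation update for NUM with noisy feedback can be sketched as follows, assuming a single bottleneck link, logarithmic utilities, and a Robbins-Monro diminishing step size (all choices ours, not the paper's):

    import random

    def noisy_aggregate_rate(rates, noise_std=0.05):
        # Stand-in for the inaccurate feedback a router would report: the true
        # aggregate source rate corrupted by zero-mean noise, so the estimate
        # is noisy but unbiased.
        true_rate = sum(rates)
        return true_rate + random.gauss(0.0, noise_std * true_rate)

    def solve_num(num_sources=3, capacity=10.0, iters=5000):
        # Each source uses the log utility U(x) = log(x), whose best response
        # to a link price p is simply x = 1 / p.
        price = 1.0
        for k in range(1, iters + 1):
            rates = [1.0 / price for _ in range(num_sources)]
            # Noisy estimate of the constraint violation sum(x) - capacity.
            violation = noisy_aggregate_rate(rates) - capacity
            # Robbins-Monro step sizes (square-summable, not summable) let the
            # price iteration average out the feedback noise.
            step = 0.05 / k
            price = max(1e-3, price + step * violation)
        return [1.0 / price for _ in range(num_sources)]

    # Each rate should settle near capacity / num_sources (about 3.33 here).
    print(solve_num())

The point of the sketch is only that an unbiased but noisy gradient, combined with diminishing steps, still drives the prices (and hence the rates) toward the optimum that exact feedback would give.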
Abstract:
About one third of patients with epilepsy are refractory to medical treatment. For these patients, alternative treatment options include implantable neurostimulation devices such as vagus nerve stimulation (VNS), deep brain stimulation (DBS), and responsive neurostimulation systems (RNS). We conducted a systematic literature review to assess the available evidence on the clinical efficacy of these devices in patients with refractory epilepsy across their lifespan. VNS has the largest evidence base, and numerous randomized controlled trials and open-label studies support its use in the treatment of refractory epilepsy. It was approved by the US Food and Drug Administration in 1997 for treatment of partial seizures, but has also shown significant benefit in the treatment of generalized seizures. Results in adult populations have been more encouraging than in pediatric populations, where more studies are required. VNS is considered a safe and well-tolerated treatment, and serious side effects are rare. DBS is a well-established treatment for several movement disorders, and has a small evidence base for treatment of refractory epilepsy. Stimulation of the anterior nucleus of the thalamus has shown the most encouraging results, where significant decreases in seizure frequency were reported. Other potential targets include the centromedian thalamic nucleus, hippocampus, cerebellum, and basal ganglia structures. Preliminary results on RNS, new-generation implantable neurostimulation devices which stimulate brain structures only when epileptic activity is detected, are encouraging. Overall, implantable neurostimulation devices appear to be a safe and beneficial treatment option for patients in whom medical treatment has failed to adequately control their epilepsy. Further large-scale randomized controlled trials are required to provide a sufficient evidence base for the inclusion of DBS and RNS in clinical guidelines.
Abstract:
DNA-binding proteins are crucial for various cellular processes and hence have become an important target for both basic research and drug development. With the avalanche of protein sequences generated in the postgenomic age, it is highly desired to establish an automated method for rapidly and accurately identifying DNA-binding proteins based on their sequence information alone. Owing to the fact that all biological species have developed beginning from a very limited number of ancestral species, it is important to take into account the evolutionary information in developing such a high-throughput tool. In view of this, a new predictor was proposed by incorporating the evolutionary information into the general form of pseudo amino acid composition via the top-n-gram approach. It was observed by comparing the new predictor with the existing methods via both jackknife test and independent data-set test that the new predictor outperformed its counterparts. It is anticipated that the new predictor may become a useful vehicle for identifying DNA-binding proteins. It has not escaped our notice that the novel approach to extract evolutionary information into the formulation of statistical samples can be used to identify many other protein attributes as well.
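As an illustration only, a plain n-gram composition vector (a much simplified stand-in for the top-n-gram features, omitting the evolutionary profile information the predictor actually incorporates) could be computed as follows:

    from collections import Counter
    from itertools import product

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def ngram_composition(sequence, n=2):
        # Frequency of every possible n-gram over the 20 standard residues;
        # the real top-n-gram features additionally fold in PSI-BLAST-style
        # evolutionary information, which is not modelled here.
        counts = Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))
        total = max(1, sum(counts.values()))
        keys = ["".join(p) for p in product(AMINO_ACIDS, repeat=n)]
        return [counts.get(k, 0) / total for k in keys]

    # A 400-dimensional feature vector for a toy sequence; in practice such a
    # vector would be fed to a classifier trained to separate DNA-binding
    # from non-binding proteins.
    features = ngram_composition("MKVLATTGACDE")
    print(len(features), round(sum(features), 3))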