23 results for merits of mandatory reporting of neglect

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

There has been a revival of interest in economic techniques for measuring the value of a firm to its shareholders, notably economic value added (EVA). This technique, based upon the concept of economic value equating to total value, is founded upon the assumptions of classical liberal economic theory. Such techniques have been criticised both for the level of adjustment to published accounts needed to make them work and for their validity in actually measuring value in a meaningful context. This paper critiques economic value added as a means of calculating changes in shareholder value, contrasting it with more traditional techniques of measuring value added. It uses the company Severn Trent plc as an actual example in order to evaluate and contrast the techniques in action. The paper demonstrates discrepancies between the results calculated using economic value added analysis and those reported using conventional accounting measures. It considers the merits of the respective techniques in explaining shareholder and managerial behaviour, and the problems with using such techniques to address the wider stakeholder concept of value. It concludes that economic value added has merits when compared with traditional accounting measures of performance, but that it does not provide the panacea claimed by its proponents.
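
For readers unfamiliar with the metric under critique, the sketch below shows the textbook economic value added calculation: net operating profit after tax (NOPAT) minus a charge for the capital employed. The figures and function name are illustrative placeholders, not Severn Trent plc data or the paper's own adjustments.

    # Textbook EVA: NOPAT minus a charge for the capital employed at the firm's WACC.
    # The inputs below are hypothetical, not figures from the paper.
    def economic_value_added(nopat: float, invested_capital: float, wacc: float) -> float:
        """EVA = NOPAT - invested_capital * weighted average cost of capital."""
        return nopat - invested_capital * wacc

    if __name__ == "__main__":
        # e.g. 120m NOPAT, 1,000m invested capital, 9% cost of capital -> EVA of 30m
        print(economic_value_added(nopat=120.0, invested_capital=1000.0, wacc=0.09))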

Relevance:

100.00%

Publisher:

Abstract:

Aims: To date, there is no convincing evidence that non-insulin treated patients who undertake self-blood glucose monitoring (SBGM) have better glycaemic control than those who test their urine. This has led to a recommendation that non-insulin dependent patients undertake urine testing, which is the cheaper option. This recommendation does not take account of patients' experiences and views. This study explores the respective merits of urine testing and SBGM from the perspectives of newly diagnosed patients with Type 2 diabetes. Methods: Qualitative study using repeat in-depth interviews with 40 patients. Patients were interviewed three times at 6-monthly intervals over 1 year. Patients were recruited from hospital clinics and general practices in Lothian, Scotland. The study was informed by grounded theory, which involves concurrent data collection and analysis. Results: Patients reported strongly negative views of urine testing, particularly when they compared it with SBGM. Patients perceived urine testing as less convenient, less hygienic and less accurate than SBGM. Most patients assumed that blood glucose meters were given to those with a more advanced or serious form of diabetes. This could have implications for how they thought about their own disease. Patients often interpreted negative urine results as indicating that they could not have diabetes. Conclusions: Professionals should be aware of the meanings and understandings patients attach to the receipt and use of different types of self-monitoring equipment. Guidelines that promote the use of consistent criteria for equipment allocation are required. The manner in which negative urine results are conveyed needs to be reconsidered.

Relevance:

100.00%

Publisher:

Abstract:

With the advent of distributed computer systems with a largely transparent user interface, new questions have arisen regarding the management of such an environment by an operating system. One fertile area of research is that of load balancing, which attempts to improve system performance by redistributing the workload submitted to the system by the users. Early work in this field concentrated on static placement of computational objects to improve performance, given prior knowledge of process behaviour. More recently this has evolved into studying dynamic load balancing with process migration, thus allowing the system to adapt to varying loads. In this thesis, we describe a simulated system which facilitates experimentation with various load balancing algorithms. The system runs under UNIX and provides functions for user processes to communicate through software ports; processes reside on simulated homogeneous processors, connected by a user-specified topology, and a mechanism is included to allow migration of a process from one processor to another. We present the results of a study of adaptive load balancing algorithms, conducted using the aforementioned simulated system, under varying conditions; these results show the relative merits of different approaches to the load balancing problem, and we analyse the trade-offs between them. Following from this study, we present further novel modifications to suggested algorithms, and show their effects on system performance.
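
To make the kind of policy being compared concrete, here is a minimal sketch of a threshold-based migration decision, the simplest family of adaptive load balancing rules. The load index, names and thresholds are hypothetical illustrations, not the algorithms or parameters evaluated in the thesis.

    # Threshold-based adaptive migration policy (illustrative sketch).
    # Queue length serves as the load index; all names and thresholds are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Processor:
        name: str
        queue_length: int  # number of runnable processes on this processor

    def select_destination(source: Processor, peers: list, high_water: int = 4,
                           low_water: int = 2):
        """Return a lightly loaded peer worth migrating to, or None to stay put."""
        if source.queue_length <= high_water:
            return None  # source not overloaded: avoid needless migration cost
        candidate = min(peers, key=lambda p: p.queue_length)
        # Migrate only if the imbalance clearly outweighs the cost of moving state.
        if candidate.queue_length <= low_water and candidate.queue_length + 1 < source.queue_length:
            return candidate
        return None

    if __name__ == "__main__":
        src = Processor("p0", queue_length=6)
        peers = [Processor("p1", 1), Processor("p2", 5)]
        dest = select_destination(src, peers)
        print(dest.name if dest else "no migration")  # -> p1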

Relevance:

100.00%

Publisher:

Abstract:

Tuberculosis is one of the most devastating diseases in the world, primarily owing to several decades of neglect and the emergence of multidrug-resistant (MDR) strains of M. tuberculosis, together with the increased incidence of disseminated infections produced by other mycobacteria in AIDS patients. This has prompted the search for new antimycobacterial drugs. A series of pyridine-2-, pyridine-3-, pyridine-4-, pyrazine- and quinoline-2-carboxamidrazone derivatives and new classes of carboxamidrazones were prepared in an automated fashion and by traditional synthesis. Over nine hundred synthesized compounds were screened for their antimycobacterial activity against M. fortuitum (NCTC 10394) as a surrogate for M. tuberculosis. The new classes of amidrazones were also screened against M. tuberculosis H37Rv and for antimicrobial activity against various bacteria. Fifteen of the tested compounds were found to provide 90-100% inhibition of growth of M. tuberculosis H37Rv in the primary screen at 6.25 μg mL-1. The most active compound in the carboxamidrazone amide series had an MIC value of 0.1-2 μg mL-1 against M. fortuitum. The enzyme dihydrofolate reductase (DHFR) has been a drug-design target for decades. Blocking the enzymatic activity of DHFR is a key element in the treatment of many diseases, including cancer and bacterial and protozoal infections. The X-ray structures of DHFR from M. tuberculosis and of human DHFR were found to differ in the substrate binding site. The presence of a glycerol molecule in the X-ray structure of M. tuberculosis DHFR provided an opportunity to design new antifolates. The new antifolates described herein were designed to retain the pharmacophore of pyrimethamine (2,4-diamino-5-(4-chlorophenyl)-6-ethylpyrimidine), but to encompass a range of polar groups that might interact with the glycerol binding pockets of M. tuberculosis DHFR. Finally, the research described in this thesis contributes to the preparation of molecularly imprinted polymers (MIPs) for the recognition of 2,4-diaminopyrimidine. The formation of hydrogen bonds between the model functional monomer 5-(4-tert-butyl-benzylidene)-pyrimidine-2,4,6-trione and 2,4-diaminopyrimidine in the pre-polymerisation stage was verified by 1H-NMR studies. Having shown that 2,4-diaminopyrimidine interacts strongly with the model 5-(4-tert-butyl-benzylidene)-pyrimidine-2,4,6-trione, 2,4-diaminopyrimidine-imprinted polymers were prepared using a novel cyclobarbital-derived functional monomer, acrylic acid 4-(2,4,6-trioxo-tetrahydro-pyrimidin-5-ylidenemethyl)phenyl ester, capable of multiple hydrogen bond formation with 2,4-diaminopyrimidine. The recognition properties of the resulting polymers towards the template and other test compounds were evaluated by fluorescence. The results demonstrate that the polymers showed dose-dependent enhancement of fluorescence emissions, and that the synthesized MIPs have higher 2,4-diaminopyrimidine binding ability than the corresponding non-imprinted polymers.

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one were to assume the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
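
As a concrete illustration of treating a single-channel recording as an observation of an underlying dynamical state, the sketch below performs a delay-coordinate (Takens) embedding, the standard first step in nonlinear time-series analysis. The toy signal, embedding dimension and lag are ours for illustration, not values or methods taken from the thesis.

    # Delay-coordinate (Takens) embedding of a single-channel time series.
    # The synthetic 'MEG-like' trace and the (dim, lag) choices are illustrative only.
    import numpy as np

    def delay_embed(x: np.ndarray, dim: int, lag: int) -> np.ndarray:
        """Return an (N - (dim - 1) * lag, dim) array of delay vectors."""
        n = len(x) - (dim - 1) * lag
        if n <= 0:
            raise ValueError("time series too short for this dimension/lag")
        return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

    if __name__ == "__main__":
        t = np.linspace(0.0, 10.0, 2000)
        signal = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
        vectors = delay_embed(signal, dim=5, lag=7)
        print(vectors.shape)  # (1972, 5): each row is one reconstructed state vector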

Relevance:

100.00%

Publisher:

Abstract:

Substantial altimetry datasets collected by different satellites have only become available during the past five years, but the future will bring a variety of new altimetry missions, both parallel and consecutive in time. The characteristics of each produced dataset vary with the different orbital heights and inclinations of the spacecraft, as well as with the technical properties of the radar instrument. An integral analysis of datasets with different properties offers advantages both in terms of data quantity and data quality. This thesis is concerned with the development of the means for such integral analysis, in particular for dynamic solutions in which precise orbits for the satellites are computed simultaneously. The first half of the thesis discusses the theory and numerical implementation of dynamic multi-satellite altimetry analysis. The most important aspect of this analysis is the application of dual satellite altimetry crossover points as a bi-directional tracking data type in simultaneous orbit solutions. The central problem is that the spatial and temporal distributions of the crossovers are in conflict with the time-organised nature of traditional solution methods. Their application to the adjustment of the orbits of both satellites involved in a dual crossover therefore requires several fundamental changes of the classical least-squares prediction/correction methods. The second part of the thesis applies the developed numerical techniques to the problems of precise orbit computation and gravity field adjustment, using the altimetry datasets of ERS-1 and TOPEX/Poseidon. Although the two datasets can be considered less compatible than those of planned future satellite missions, the obtained results adequately illustrate the merits of a simultaneous solution technique. In particular, the geographically correlated orbit error is partially observable from a dataset consisting of crossover differences between two sufficiently different altimetry datasets, while being unobservable from the analysis of altimetry data of both satellites individually. This error signal, which has a substantial gravity-induced component, can be employed advantageously in simultaneous solutions for the two satellites in which the harmonic coefficients of the gravity field model are also estimated.
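
To illustrate why dual-satellite crossovers act as a bi-directional tracking data type, the sketch below sets up the simplest possible least-squares adjustment in which each crossover difference observes the radial errors of both satellites at once. The one-bias-per-satellite error model and the numbers are deliberately crude illustrations of the bookkeeping, not the dynamic multi-satellite solution developed in the thesis.

    # Dual-satellite crossover differences as observations of both orbits at once.
    # Error model: a single radial bias per satellite (purely illustrative).
    import numpy as np

    # Hypothetical sea-surface heights (metres) measured by satellites A and B
    # at the same crossover locations.
    crossovers = np.array([
        [0.42, 0.35],
        [0.51, 0.44],
        [0.38, 0.30],
    ])

    # Observation equation at each crossover: h_A - h_B = bias_A - bias_B + noise,
    # so every difference constrains the errors of BOTH satellites simultaneously.
    A = np.tile([1.0, -1.0], (len(crossovers), 1))   # design matrix
    y = crossovers[:, 0] - crossovers[:, 1]          # crossover differences

    # Differences fix only bias_A - bias_B; add a datum constraint (bias_B = 0)
    # so the least-squares problem has a unique solution.
    A = np.vstack([A, [0.0, 1.0]])
    y = np.append(y, 0.0)

    (bias_a, bias_b), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"relative radial bias A - B: {bias_a - bias_b:.3f} m")  # ~0.073 m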

Relevance:

100.00%

Publisher:

Abstract:

Progressive addition spectacle lenses (PALs) have now become the method of choice for many presbyopic individuals to alleviate the visual problems of middle age. Such lenses are difficult to assess and characterise because they lack discrete geographical locators for their key features. A review of the literature (mostly patents) describing the different designs of these lenses indicates the range of approaches to solving the visual problem of presbyopia. However, very little is published about the comparative optical performance of these lenses. A method based on interferometry is described here for the assessment of PALs, together with a comparison against measurements made on an automatic focimeter. The relative merits of these techniques are discussed. Although the measurements are comparable, it is considered that the interferometry method is more readily automated, and would ultimately be capable of producing a more rapid result.

Relevance:

100.00%

Publisher:

Abstract:

Distortion or deprivation of vision during an early 'critical' period of visual development can result in permanent visual impairment, which indicates the need to identify and treat visually at-risk individuals early. A significant difficulty in this respect is that conventional, subjective methods of visual acuity determination are ineffective before approximately three years of age. In laboratory studies, infant visual function has been quantified precisely, using objective methods based on visual evoked potentials (VEP), preferential looking (PL) and optokinetic nystagmus (OKN), but clinical assessment of infant vision has presented a particular difficulty. An initial aim of this study was to evaluate the relative clinical merits of the three techniques. Clinical derivatives were devised; the OKN method proved unsuitable, but the PL and VEP methods were evaluated in a pilot study. Most infants participating in the study had known ocular and/or neurological abnormalities, but a few normals were included for comparison. The study suggested that the PL method was more clinically appropriate for the objective assessment of infant acuity. A study of normal visual development from birth to one year was subsequently conducted. Observations included cycloplegic refraction, ophthalmoscopy and preferential looking visual acuity assessment using horizontally and vertically oriented square wave gratings. The aims of the work were to investigate the efficiency and sensitivity of the technique and to study possible correlates of visual development. The success rate of the PL method varied with age; 87% of newborns and 98% of infants attending follow-up successfully completed at least one acuity test. Below two months, monocular acuities were difficult to secure; infants were most testable around six months. The results produced were similar to published data using the acuity card procedure and slightly lower than, but comparable with, acuity data derived using extended PL methods. Acuity development was not impaired in infants found to have retinal haemorrhages as newborns. A significant relationship was found between newborn binocular acuity and anisometropia but not with other refractive findings. No strong or consistent correlations between grating acuity and refraction were found for three-, six- or twelve-month-olds. Improvements in acuity and decreases in levels of hyperopia over the first week of life were suggestive of recovery from minor birth trauma. The refractive data was analysed separately to investigate the natural history of refraction in normal infants. Most newborns (80%) were hyperopic; significant astigmatism was found in 86% and significant anisometropia in 22%. No significant alteration in spherical equivalent refraction was noted between birth and three months; a significant reduction in hyperopia was evident by six months, and this trend continued until one year. Observations on the astigmatic component of the refractive error revealed a rather erratic series of changes which would be worthy of further investigation, since a repeat refraction study suggested difficulties in obtaining stable measurements in newborns. Astigmatism tended to decrease between birth and three months, increased significantly from three to six months and decreased significantly from six to twelve months. A constant decrease in the degree of anisometropia was evident throughout the first year. These findings have implications for the correction of infantile refractive error.
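
The refractive quantities referred to above reduce to simple arithmetic on the sphere/cylinder prescription; the sketch below uses the standard definitions (spherical equivalent = sphere + cylinder/2, anisometropia = interocular difference in spherical equivalent). The example prescriptions are hypothetical, not study data.

    # Standard refractive summary measures (dioptres). Example values are hypothetical.
    def spherical_equivalent(sphere_d: float, cylinder_d: float) -> float:
        """Spherical equivalent: sphere plus half the cylinder."""
        return sphere_d + cylinder_d / 2.0

    def anisometropia(right_se_d: float, left_se_d: float) -> float:
        """Interocular difference in spherical equivalent."""
        return abs(right_se_d - left_se_d)

    if __name__ == "__main__":
        right = spherical_equivalent(sphere_d=3.00, cylinder_d=-1.50)   # +2.25 D
        left = spherical_equivalent(sphere_d=1.00, cylinder_d=-0.50)    # +0.75 D
        print(right, left, anisometropia(right, left))                  # 2.25 0.75 1.5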

Relevance:

100.00%

Publisher:

Abstract:

The effect of organically modified clay on the morphology, rheology and mechanical properties of high-density polyethylene (HDPE) and polyamide 6 (PA6) blends (HDPE/PA6 = 75/25 parts) is studied. Virgin and filled blends were prepared by melt compounding the constituents using a twin-screw extruder. The influence of the organoclay on the morphology of the hybrid was investigated in depth by means of wide-angle X-ray diffractometry, transmission and scanning electron microscopies and quantitative extraction experiments. It has been found that the organoclay locates exclusively inside the more hydrophilic polyamide phase during the melt compounding. The extrusion process promotes the formation of highly elongated and separated organoclay-rich PA6 domains. Despite its low volume fraction, the filled minor phase eventually merges once the extruded pellets are melted again, giving rise to a co-continuous microstructure. Remarkably, such a morphology persists for a long time in the melt state. A possible compatibilizing action related to the organoclay has been investigated by comparing the morphology of the hybrid blend with that of a blend compatibilized using an ethylene–acrylic acid (EAA) copolymer as a compatibilizer precursor. The former remains phase separated, indicating that the filler does not promote the enhancement of the interfacial adhesion. The macroscopic properties of the hybrid blend were interpreted in the light of its morphology. The melt state dynamics of the materials were probed by means of linear viscoelastic measurements. Many peculiar rheological features of polymer-layered silicate nanocomposites based on a single polymer matrix were detected for the hybrid blend. The results have been interpreted by proposing the existence of two distinct populations of dynamical species: HDPE not interacting with the filler, and a slower species, constituted by the organoclay-rich polyamide phase, whose slackened dynamics stabilize the morphology in the melt state. In the solid state, both the reinforcement effect of the filler and the co-continuous microstructure promote the enhancement of the tensile modulus. Our results demonstrate that adding nanoparticles to polymer blends allows tailoring the final properties of the hybrid, potentially leading to high-performance materials which combine the advantages of polymer blends and the merits of polymer nanocomposites.

Relevance:

100.00%

Publisher:

Abstract:

Thus far, achieving net biodiversity gains through major urban developments has been neither common nor straightforward - despite the presence of incentives, regulatory contexts, and ubiquitous practical guidance tools. A diverse set of obstructions, occurring within different spatial, temporal and actor hierarchies, is experienced by practitioners and renders the realisation of maximised biodiversity a rarity. This research aims to illuminate why this is so, and what needs to be changed to rectify the situation. To determine meaningful findings and conclusions, capable of assisting applied contexts and accommodating a diverse range of influences, a 'systems approach' was adopted. This approach led to the use of a multi-strategy research methodology, to identify the key obstructions and solutions to protecting and enhancing biodiversity - incorporating the following methods: action research, a questionnaire to local government ecologists, interviews and personal communications with leading players, and literature reviews. Nevertheless, 'case studies' are the predominant research method, the focus being a 'nested' case study looking at strategic issues of the largest regeneration area in Europe, 'the Thames Gateway', and the largest individual mixed-use mega-development in the UK (at the time of planning consent), 'Eastern Quarry 2', set within the Gateway. A further key case study, focussing on the Central Riverside development in Sheffield, identifies the merits of competition and partnership. The nested cases, theories and findings show that the strategic scale - generally relating to governance and prioritisation - impacts heavily upon individual development sites. It also enables the identification of various processes, mechanisms and issues at play on the individual development sites, which primarily relate to project management, planning processes, skills and transdisciplinary working, innovative urban biodiversity design capabilities, incentives, organisational cultures, and socio-ecological resilience. From these findings a way forward is mapped, spanning aspects from strategic governance to detailed project management.

Relevance:

100.00%

Publisher:

Abstract:

Parental reports suggest that difficulties related to child-feeding and children's eating behaviour are extremely common. While 'fussy eating' does not pose an immediate threat to health, over the long term, consumption of a poor diet can contribute to the development of a range of otherwise preventable diseases. In addition, the stress and anxiety that can surround difficult mealtimes can have a detrimental impact upon both child and parental psychological wellbeing. Since parents have a great influence over what, when, and how much food is offered, feeding difficulties may be preventable through better parental awareness. The aim of this review is to describe how parental factors contribute to the development of common feeding problems, and to discuss the merits of existing interventions aimed at parents/primary caregivers to improve child-feeding and children's eating behaviour. The potential for different technologies to be harnessed in order to deliver interventions in new ways will also be discussed. © 2012 Elsevier Ltd.

Relevance:

100.00%

Publisher:

Abstract:

We report a highly sensitive, high-Q-factor, label-free and selective glucose sensor based on an excessively tilted fiber grating (Ex-TFG) inscribed in thin-cladding optical fiber (TCOF). Glucose oxidase (GOD) was covalently immobilized on the optical fiber surface, and the effectiveness of the GOD immobilization was investigated by fluorescence microscopy and a highly accurate spectral interrogation method. In contrast to long period grating (LPG) and optical fiber (OF) surface plasmon resonance (SPR) based glucose sensors, the Ex-TFG configuration has the merits of negligible cross-sensitivity to environmental temperature, a simple fabrication method (no noble metal deposition or cladding etching) and high detection accuracy (or Q-factor). Our experimental results show that the Ex-TFG-in-TCOF sensor provides reliable and fast detection of glucose over the concentration range 0.1-2.5 mg/ml, with a high sensitivity of ~1.514 nm·(mg/ml)−1, a detection accuracy of ~0.2857 nm−1 at pH 5.2, and a limit of detection (LOD) of 0.013-0.02 mg/ml over the pH range 5.2-7.4, using an optical spectrum analyzer with a resolution of 0.02 nm.
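
Since the abstract quotes a wavelength sensitivity, converting a measured resonance shift into a glucose concentration is a one-line calculation; the sketch below does this using the reported ~1.514 nm·(mg/ml)−1 figure. The assumption of a simple linear response across the 0.1-2.5 mg/ml range, and the example shift, are ours.

    # Convert a resonance wavelength shift into a glucose concentration using the
    # sensitivity reported in the abstract. A linear response is assumed for illustration.
    SENSITIVITY_NM_PER_MG_ML = 1.514   # ~1.514 nm per mg/ml (from the abstract)
    OSA_RESOLUTION_NM = 0.02           # optical spectrum analyser resolution (from the abstract)

    def glucose_concentration(shift_nm: float) -> float:
        """Estimated glucose concentration (mg/ml) from a resonance shift (nm)."""
        return shift_nm / SENSITIVITY_NM_PER_MG_ML

    if __name__ == "__main__":
        print(round(glucose_concentration(0.76), 2))               # hypothetical 0.76 nm shift -> ~0.5 mg/ml
        print(round(glucose_concentration(OSA_RESOLUTION_NM), 3))  # resolution-limited step, ~0.013 mg/ml,
                                                                   # consistent with the quoted LOD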

Relevance:

100.00%

Publisher:

Abstract:

Feasibility studies of industrial projects consist of multiple analyses carried out sequentially. This is time-consuming, and each analysis screens out alternatives based solely on the merits of that analysis. In cross-country petroleum pipeline project selection, market analysis determines throughput requirement and supply and demand points. Technical analysis identifies technological options and alternatives for pipeline routes. Economic and financial analysis derives the least-cost option. The impact assessment addresses environmental issues. The impact assessment often suggests alternative sites, routes, technologies, and/or implementation methodology, necessitating revision of the technical and financial analysis. This report suggests an integrated approach to feasibility analysis, presented as a case application of a cross-country petroleum pipeline project in India.

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines options for high-capacity all-optical networks. Specifically, optical time division multiplexed (OTDM) networks based on electro-optic modulators are investigated experimentally, whilst comparisons with alternative approaches are carried out. It is intended that the thesis will form the basis of comparison between optical time division multiplexed networks and the more mature approach of wavelength division multiplexed networks. Following an introduction to optical networking concepts, the required component technologies are discussed. In particular, various optical pulse sources are described with the demanding restrictions of optical multiplexing in mind. This is followed by a discussion of the construction of multiplexers and demultiplexers, including favoured techniques for high-speed clock recovery. Theoretical treatments of the performance of Mach-Zehnder and electroabsorption modulators support the design criteria that are established for the construction of simple optical time division multiplexed systems. Having established appropriate end terminals for an optical network, the thesis examines transmission issues associated with high-speed RZ data signals. Propagation of RZ signals over both installed (standard fibre) and newly commissioned fibre routes is considered in turn. In the case of standard fibre systems, the use of dispersion compensation is summarised, and the application of mid-span spectral inversion experimentally investigated. For greenfield sites, soliton-like propagation of high-speed data signals is demonstrated. In this case the particular restrictions of high-speed soliton systems are discussed and experimentally investigated, namely the increasing impact of timing jitter and the downward pressure on repeater spacings due to the constraint of the average soliton model. These issues are addressed through investigations of active soliton control for OTDM systems and of novel fibre types, respectively. Finally, the particularly remarkable networking potential of optical time division multiplexed systems is established, and infinite node cascadability using soliton control is demonstrated. A final comparison of the various technologies for optical multiplexing is presented in the conclusions, where their relative merits for optical networking emerge as the key differentiator between technologies.

Relevance:

100.00%

Publisher:

Abstract:

We summarize the various strands of research on peripheral vision and relate them to theories of form perception. After a historical overview, we describe quantifications of the cortical magnification hypothesis, including an extension of Schwartz's cortical mapping function. The merits of this concept are considered across a wide range of psychophysical tasks, followed by a discussion of its limitations and the need for non-spatial scaling. We also review the eccentricity dependence of other low-level functions including reaction time, temporal resolution, and spatial summation, as well as perimetric methods. A central topic is then the recognition of characters in peripheral vision, both at low and high levels of contrast, and the impact of surrounding contours known as crowding. We demonstrate how Bouma's law, specifying the critical distance for the onset of crowding, can be stated in terms of the retinocortical mapping. The recognition of more complex stimuli, like textures, faces, and scenes, reveals a substantial impact of mid-level vision and cognitive factors. We further consider eccentricity-dependent limitations of learning, both at the level of perceptual learning and pattern category learning. Generic limitations of extrafoveal vision are observed for the latter in categorization tasks involving multiple stimulus classes. Finally, models of peripheral form vision are discussed. We report that peripheral vision is limited with regard to pattern categorization by a distinctly lower representational complexity and processing speed. Taken together, the limitations of cognitive processing in peripheral vision appear to be as significant as those imposed on low-level functions and by way of crowding.
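
For reference, the commonly quoted form of Bouma's law (the relation that the retinocortical-mapping argument restates) is a linear scaling of critical spacing with eccentricity. The proportionality constant of roughly 0.5 is the widely cited approximation, not a figure taken from this review.

    % Bouma's law: the critical centre-to-centre flanker spacing d_crit at which
    % crowding sets in grows linearly with target eccentricity phi.
    \[
      d_{\mathrm{crit}} \approx b\,\varphi, \qquad b \approx 0.5,
    \]
    % where \varphi is the target eccentricity and b is the commonly cited constant.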