Abstract:
Recent discussion of the knowledge-based economy draws increasing attention to the role that the creation and management of knowledge play in economic development. Development of human capital, the principal mechanism for knowledge creation and management, becomes a central issue for policy-makers and practitioners at the regional, as well as national, level. Facing competition both within and across nations, regional policy-makers view human capital development as a key to strengthening the positions of their economies in the global market. Against this background, the aim of this study is to go some way towards answering the question of whether, and how, investment in education and vocational training at regional level provides these territorial units with comparative advantages. The study reviews the literature in economics and economic geography on economic growth (Chapter 2). In the growth model literature, human capital has gained increased recognition as a key production factor, along with physical capital and labour. Although they treat technical progress as an exogenous factor, neoclassical Solow-Swan models have improved their estimates through the inclusion of human capital. In contrast, endogenous growth models place investment in research at centre stage in accounting for technical progress. As a result, they often focus upon research workers, who embody high-order human capital, as a key variable in their framework. One issue under discussion is how human capital facilitates economic growth: is it the level of its stock or its accumulation that influences the rate of growth? In addition, these economic models are criticised in the economic geography literature for their failure to consider spatial aspects of economic development, and particularly for their lack of attention to tacit knowledge and to the urban environments that facilitate the exchange of such knowledge. Our empirical analysis of European regions (Chapter 3) shows that investment by individuals in human capital formation has distinct patterns. Regions with a higher level of investment in tertiary education tend to have a larger concentration of information and communication technology (ICT) sectors (including the provision of ICT services and the manufacture of ICT devices and equipment) and research functions. Not surprisingly, regions with major metropolitan areas where higher education institutions are located show a high enrolment rate for tertiary education, suggesting a possible link to the demand from high-order corporate functions located there. Furthermore, the rate of human capital development (at the level of vocational upper secondary education) appears to have a significant association with the level of entrepreneurship in emerging industries such as ICT-related services and ICT manufacturing, whereas no such association is found with traditional manufacturing industries. In general, a high level of investment by individuals in tertiary education is found in those regions that accommodate high-tech industries and high-order corporate functions such as research and development (R&D). These functions are supported by urban infrastructure and the public science base, which facilitate the exchange of tacit knowledge. Such regions also enjoy a low unemployment rate. However, the existing stock of human and physical capital in regions with a high level of urban infrastructure does not lead to a high rate of economic growth.
Our empirical analysis demonstrates that the rate of economic growth is determined by the accumulation of human and physical capital, not by the level of their existing stocks. We found no significant scale effects that would favour regions with a larger stock of human capital. The primary policy implication of our study is that, in order to facilitate economic growth, education and training need to supply human capital at a faster pace than simply replenishing it as it disappears from the labour market. Given the significant impact of high-order human capital (such as business R&D staff in our case study), as well as the increasingly fast pace of technological change that makes human capital obsolete, a concerted effort needs to be made to facilitate its continuous development.
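To make the stock-versus-accumulation distinction concrete, the following sketch shows the augmented Solow-Swan production function and its steady state in the spirit of Mankiw, Romer and Weil (1992); the abstract does not state the exact specification used in the study, so the functional form and notation here are illustrative assumptions.

```latex
% Augmented Solow-Swan model with human capital (illustrative form)
Y(t) = K(t)^{\alpha}\, H(t)^{\beta}\, \bigl(A(t)L(t)\bigr)^{1-\alpha-\beta},
\qquad \alpha + \beta < 1,
% Steady-state income per effective worker depends on the investment
% (accumulation) rates s_K and s_H, not on the initial stock levels:
\hat{y}^{*} = \left( \frac{s_K^{\alpha}\, s_H^{\beta}}
    {(n + g + \delta)^{\alpha + \beta}} \right)^{\frac{1}{1-\alpha-\beta}}.
```

Here K and H are physical and human capital, A is technology, L is labour, s_K and s_H are the respective investment rates, n is population growth, g is technical progress and delta is depreciation; the dependence of steady-state income on the investment rates rather than on initial stocks parallels the finding that growth is driven by accumulation.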
Abstract:
Objective: To explore views of patients with type 2 diabetes about self monitoring of blood glucose over time. Design: Longitudinal, qualitative study. Setting: Primary and secondary care settings across Lothian, Scotland. Participants: 18 patients with type 2 diabetes. Main outcome measures: Results from repeated in-depth interviews with patients over four years after clinical diagnosis. Results: Analysis revealed three main themes: the role of health professionals, interpreting readings and managing high values, and the ongoing role of blood glucose self monitoring. Self monitoring decreased over time, and health professionals' behaviour seemed crucial in this: participants interpreted doctors' focus on levels of haemoglobin A1c, and lack of perceived interest in meter readings, as indicating that self monitoring was not worth continuing. Some participants saw readings as a proxy measure of good and bad behaviour, with women in particular chastising themselves when readings were high. Some participants continued to find readings difficult to interpret, with uncertainty about how to respond to high readings. Reassurance and habit were key reasons for continuing. There was little indication that participants were using self monitoring to effect and maintain behaviour change. Conclusions: Clinical uncertainty about the efficacy and role of blood glucose self monitoring in patients with type 2 diabetes is mirrored in patients' own accounts. Patients tended not to act on their self monitoring results, in part because of a lack of education about the appropriate response to readings. Health professionals should be explicit about whether and when such patients should self monitor, and about how they should interpret and act upon the results, especially high readings.
Abstract:
PURPOSE: To determine whether letter sequences and/or lens-presentation order should be randomized when measuring defocus curves, and to assess the most appropriate criterion for calculating the subjective amplitude of accommodation (AoA) from defocus curves. SETTING: Eye Clinic, School of Life & Health Sciences, Aston University, Birmingham, United Kingdom. METHODS: Defocus curves (from +3.00 diopters [D] to -3.00 D in 0.50 D steps) for 6 possible combinations of randomized or nonrandomized letter sequences and/or lens-presentation order were measured in a random order in 20 presbyopic subjects. Subjective AoA was calculated from the defocus curves by curve fitting using various published criteria, and each was correlated with subjective push-up AoA. Objective AoA was measured for comparison of blur tolerance and pupil size. RESULTS: Randomization of lens-presentation order and/or letter sequences, or the lack thereof, did not affect the measured defocus curves (P>.05, analysis of variance). The range of defocus that maintains the highest achievable visual acuity (allowing for the variability of repeated measurement) was more strongly correlated with (r = 0.84), and agreed more closely with (±0.50 D), subjective push-up AoA than any other relative or absolute acuity criterion used in previous studies. CONCLUSIONS: Nonrandomized letters and nonrandomized lens presentation did not, on their own, affect subjective AoA measured by defocus curves, although their combination should be avoided. Quantification of subjective AoA from defocus curves should be standardized to the range of defocus that maintains the best achievable visual acuity.
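Since the recommended criterion is the range of defocus that maintains the best achievable acuity, a short sketch may help clarify how such a criterion can be computed from a measured defocus curve. The data points, the fourth-order polynomial fit and the 0.10 logMAR repeatability tolerance below are all assumptions for illustration, not the paper's actual values.

```python
import numpy as np

# Hypothetical defocus curve: visual acuity (logMAR, lower = better)
# measured from +3.00 D to -3.00 D in 0.50 D steps, as in the study.
defocus = np.arange(3.0, -3.5, -0.5)                       # lens powers (D)
acuity = np.array([0.70, 0.55, 0.38, 0.22, 0.10, 0.02, 0.00,
                   0.02, 0.08, 0.20, 0.35, 0.52, 0.68])    # logMAR

# Fit a smooth curve (a 4th-order polynomial is one common choice).
coeffs = np.polyfit(defocus, acuity, deg=4)
fine = np.linspace(defocus.min(), defocus.max(), 601)
fitted = np.polyval(coeffs, fine)

# Criterion: the range of defocus over which acuity stays within the
# variability of repeated measurement of the best achievable acuity.
# A tolerance of 0.10 logMAR is an assumed value for illustration.
best = fitted.min()
tolerance = 0.10
within = fine[fitted <= best + tolerance]
amplitude_of_accommodation = within.max() - within.min()

print(f"Best fitted acuity: {best:.2f} logMAR")
print(f"Subjective AoA: {amplitude_of_accommodation:.2f} D")
```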
Abstract:
Whilst research on work group diversity has proliferated in recent years, relatively little attention has been paid to the precise definition of diversity or to its measurement. One of the few studies to do so is Harrison and Klein's (2007) typology, which defined three types of diversity (separation, variety and disparity) and suggested possible indices with which each should be measured. However, their typology is limited by its association of diversity types with variable measurement, by a lack of clarity over the meaning of variety, and by the absence of clear guidance about which diversity index should be employed. In this thesis I develop an extended version of the typology, comprising four diversity types (separation, range, spread and disparity), and propose specific indices to be used for each type of diversity with each variable type (ratio, interval, ordinal and nominal). Indices are chosen or derived from first principles based on the precise definition of the diversity type. I then test the usefulness of these indices in predicting outcomes of diversity compared with other indices, using both an extensive simulated data set (to estimate the effects of mis-specification of diversity type or index) and eight real data sets (to examine whether the proposed indices produce the strongest relationships with hypothesised outcomes). The analyses lead to the conclusion that the indices proposed in the typology are at least as good as, and usually better than, other indices in terms of both measured effect sizes and the power to find significant results, and thus provide evidence to support the typology. Implications for theory and methodology are discussed.
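To make the diversity types concrete, the sketch below computes one commonly proposed index for each of three of Harrison and Klein's types: standard deviation for separation, Blau's index for variety, and the coefficient of variation for disparity. The thesis proposes its own extended set of indices, which this illustration does not reproduce, and the eight-person group data are invented.

```python
import numpy as np
from collections import Counter

def separation_sd(values):
    """Separation: differences of position along a continuum,
    commonly measured by the standard deviation."""
    return float(np.std(values))

def variety_blau(categories):
    """Variety: differences in kind, commonly measured by
    Blau's index 1 - sum(p_k^2) over category proportions p_k."""
    n = len(categories)
    return 1.0 - sum((c / n) ** 2 for c in Counter(categories).values())

def disparity_cv(values):
    """Disparity: differences in the concentration of valued
    resources, commonly measured by the coefficient of variation."""
    values = np.asarray(values, dtype=float)
    return float(np.std(values) / np.mean(values))

# A hypothetical eight-person work group.
attitudes = [1, 2, 2, 3, 5, 5, 6, 7]             # interval scale (separation)
functions = ["R&D", "R&D", "sales", "HR",
             "sales", "R&D", "ops", "ops"]        # nominal scale (variety)
salaries = [28, 30, 31, 33, 35, 38, 60, 95]       # ratio scale (disparity)

print(separation_sd(attitudes), variety_blau(functions), disparity_cv(salaries))
```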
Abstract:
Objectives: Pharmacists play an important role in the review of local hospital guidelines. British Thoracic Society (BTS) guidelines for the management of patients with community-acquired pneumonia (CAP) were updated in 2001, and it is important that individual hospital recommendations are based upon this national guidance. The aim of this study was to identify UK Chief Pharmacists' awareness of these updated guidelines one year after their publication. Secondary aims were to identify whether pharmacists had subsequently initiated revision of institutional CAP guidelines, and what roles different professional staff had performed in this process. Method: A self-completion postal questionnaire was sent to the Chief Pharmacist (or their nominated staff) in 253 UK NHS hospitals in November 2002. This aimed to identify issues relating to their awareness of the 2001 BTS guidelines and the subsequent revision of their hospital's guidelines. Results: 188 questionnaires were returned (a response rate of 74%), of which 164 hospitals had local antibiotic prescribing guidelines. Respondents in 29% of these hospitals were unaware of the 2001 BTS publication, and institutional guidelines had been revised in only 51% of hospitals where the Chief Pharmacist was purportedly aware of the new BTS guidance. Generally, more staff types were involved in revising guidelines than in initiating revision. Conclusions: Variability existed in both Chief Pharmacists' awareness of new national guidance and the subsequent review processes operating in individual hospitals. A lack of a proactive response to new national guidance was identified in some hospitals, and it is hoped that the establishment of specialist "infectious diseases pharmacists" will facilitate the review of institutional antibiotic prescribing guidelines in the future. © Springer 2005.
Abstract:
There may be circumstances where it is necessary for microbiologists to compare variances rather than means, e.g., in analysing data from experiments to determine whether a particular treatment alters the degree of variability, or in testing the assumption of homogeneity of variance prior to other statistical tests. All of the tests described in this Statnote have their limitations. Bartlett's test may be too sensitive, but Levene's test and the Brown-Forsythe test also have problems. We would recommend the use of the variance-ratio test to compare two variances and the careful application of Bartlett's test if there are more than two groups. Considering that these tests are not particularly robust, it should be remembered that the homogeneity of variance assumption is usually the least important of those considered when carrying out an ANOVA. If there is concern about this assumption, and especially if the other assumptions of the analysis are also not likely to be met (e.g., lack of normality or non-additivity of treatment effects), then it may be better either to transform the data or to carry out a non-parametric test.
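As an illustration of the recommended practice, the sketch below applies the variance-ratio (F) test to two hypothetical groups and, for more than two groups, shows SciPy's implementations of Bartlett's test and the Brown-Forsythe variant of Levene's test. The data are simulated for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10, 2.0, size=25)   # hypothetical treatment group
b = rng.normal(10, 3.5, size=25)   # hypothetical control group

# Variance-ratio (F) test for two groups: larger variance on top.
va, vb = np.var(a, ddof=1), np.var(b, ddof=1)
f = max(va, vb) / min(va, vb)
df1 = (len(a) if va >= vb else len(b)) - 1
df2 = (len(b) if va >= vb else len(a)) - 1
p_two_sided = min(1.0, 2 * stats.f.sf(f, df1, df2))
print(f"F = {f:.2f}, p = {p_two_sided:.4f}")

# For more than two groups: Bartlett's test (sensitive to
# non-normality) and the Brown-Forsythe test (Levene's test
# computed about the median).
c = rng.normal(10, 2.5, size=25)
print(stats.bartlett(a, b, c))
print(stats.levene(a, b, c, center="median"))  # Brown-Forsythe variant
```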
Abstract:
The research investigates the past, present and potential future role of Information Specialists (ISps) in process-oriented companies. It tests the proposition that ISps in companies that have undertaken formal process reengineering exercises are likely to become more proactive and more business-oriented (as opposed to technically oriented) than they had previously been when their organisations were organised along traditional, functional lines. A review of the existing literature in the areas of Business Process Reengineering (BPR) and Information Management reveals a lack of consensus amongst researchers concerning the appropriate role for ISps during and after BPR. Opinion is divided as to whether IS professionals should reactively support BPR or whether IT/IS developments should be driving these initiatives. A questionnaire-based descriptive survey with 60 respondents is used as the first stage of primary data gathering. This is followed by interviews with 20 of the participating organisations to gather further information on their experiences. The final stage of data collection consists of further in-depth interviews with four case-study companies to provide an even richer picture of their experiences. The results of the questionnaire are analysed and displayed in the form of simple means, frequencies and bar graphs. The NUD*IST computer-based discourse analysis package was tried for summarising the interview findings, but this proved cumbersome and a visual collation method was preferred. Overall, the researcher contends that the proposition outlined above is proven, and she concludes the research by suggesting the implications of these findings. In particular, she offers a 'Framework for Understanding and Action' which is deemed to be relevant to both practitioners and future researchers.
Abstract:
The rationale for carrying out this research was to address the clear lack of knowledge surrounding the measurement of public hospital performance in Ireland. The objectives of this research were to develop a comprehensive model for measuring hospital performance and to use this model to measure the performance of public acute hospitals in Ireland in 2007. Having assessed the advantages and disadvantages of various measurement models, the Data Envelopment Analysis (DEA) model was chosen for this research. DEA was initiated by Charnes, Cooper and Rhodes in 1978 and further developed by Färe et al. (1983) and Banker et al. (1984). The method used to choose the relevant inputs and outputs to be included in the model followed that adopted by Casu et al. (2005), which included the use of focus groups. The main conclusions of the research are threefold. Firstly, it is clear that each stakeholder group has differing opinions on what constitutes good performance. It is therefore imperative that any performance measurement model be designed within parameters that are clearly understood by any intended audience. Secondly, there is a lack of publicly available qualitative information in Ireland, which inhibits detailed analysis of hospital performance. Thirdly, based on the available qualitative and quantitative data, the results indicated a high level of efficiency among the public acute hospitals in Ireland in their staffing and non-pay costs, averaging 98.5%. As DEA scores are sensitive to the number of input and output variables as well as to the size of the sample, it should be borne in mind that a high level of efficiency could be a result of using DEA with too many variables relative to the number of hospitals. No hospital was deemed to be scale efficient in any of the models, even though the average scale efficiency for all of the hospitals was relatively high at 90.3%. Arising from this research, the main recommendations are that information on medical outcomes, survival rates and patient satisfaction should be made publicly available in Ireland; that, despite a high average efficiency level, many individual hospitals need to focus on improving their technical and scale efficiencies; and that performance measurement models should be developed that include more qualitative data.
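For readers unfamiliar with DEA, the sketch below solves the input-oriented CCR envelopment model of Charnes, Cooper and Rhodes (1978) as a linear programme. The five-hospital dataset and the choice of inputs and outputs are invented for illustration and do not reproduce the specification arrived at through the thesis's focus groups.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score for unit o.
    X: (m inputs x n units), Y: (s outputs x n units)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: z = [theta, lambda_1 ... lambda_n].
    c = np.zeros(n + 1)
    c[0] = 1.0                                 # minimise theta
    # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Hypothetical data: 5 hospitals, inputs = (staff, non-pay costs),
# single output = (weighted inpatient discharges).
X = np.array([[120.0, 300, 90, 150, 200],
              [ 40.0,  95, 30,  55,  70]])
Y = np.array([[ 95.0, 230, 75, 100, 140]])

for o in range(X.shape[1]):
    print(f"Hospital {o + 1}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```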
Abstract:
This article considers whether the use of leadership language may be one under-explored reason why there continues to be a significant lack of women at executive level. Do women make less of a linguistic impact in the boardroom than men? By analysing linguistic data from senior management meetings and follow-up interviews in seven multinational UK companies, we suggest that senior women and men use a very similar range of linguistic strategies to lead their teams, except in one key respect. Women appear to monitor and regulate their use of language more than men, adjusting what they say in the light of their colleagues' concerns and agendas. This use of 'double-voiced discourse' (Bakhtin, 1994/1963) enables women to survive in a male-dominated business world, but it can sometimes make their voices 'harder to hear'. However, double-voiced discourse is also a form of linguistic expertise and might potentially offer women leaders a strategy for success.
Abstract:
Despite much anecdotal, and oftentimes empirical, evidence that black and ethnic minority employees do not feel integrated into organisational life, and despite the implications of this lack of integration for their career progression, there is a dearth of research on the nature of the relationship black and ethnic minority employees have with their employing organisations. Additionally, research examining the relationship between diversity management and work outcomes has returned mixed findings. Scholars have attributed this to the lack of an empirically validated measure of workforce diversity management. Accordingly, I sought to address these gaps in the extant literature in a two-part study grounded in social exchange theory. In Study 1, I developed and validated a measure of workforce diversity management practices. Data obtained from a sample of ethnic minority employees from a cross-section of organisations provided support for the validity of the scale. In Study 2, I proposed and tested a social-exchange-based model of the relationship between black and ethnic minority employees and their employing organisations, and assessed the implications of this relationship for their work outcomes. Specifically, I hypothesised: (i) perception of support for diversity, perception of overall justice, and developmental experiences (indicators of integration into organisational life) as mediators of the relationship between diversity management and social exchange with the organisation; (ii) a moderating influence of diversity climate on the relationship between diversity management and these indicators of integration; and (iii) work outcomes of social exchange with the organisation defined in terms of career satisfaction, turnover intention and strain. Structural equation modelling (SEM) results provide support for most of the hypothesised relationships. The findings of the study contribute to the literature on workforce diversity management in a number of ways. First, the development and validation of a diversity management practices scale constitutes a first step in resolving the difficulty of operationalising and measuring the diversity management construct. Second, the study explicates how and why diversity management practices influence a social exchange relationship with an employing organisation, and the implications of this relationship for the work outcomes of black and ethnic minority employees. My study's focus on employee work outcomes is an important corrective to the predominant focus on organisational-level outcomes of diversity management. Lastly, by focusing on ethno-racial diversity, my research complements the extant research on such workforce diversity indicators as age and gender.
Abstract:
Whether used to assess the functionality of equipment or to determine the accuracy of assays, reference standards are essential for the purposes of standardisation and validation. The ELISPOT assay, developed over thirty years ago, has emerged as a leading immunological assay for the assessment of efficacy in the development of novel vaccines. However, with its widespread use, there is a growing demand for a greater level of standardisation across different laboratories. One of the major difficulties in achieving this goal has been the lack of definitive reference standards. This is partly due to the ex vivo nature of the assay, which relies on cells being placed directly into the wells. Thus, the aim of this thesis was to produce an artificial reference standard, using liposomes, for use within the assay. Liposomes are spherical bilayer vesicles with an enclosed aqueous compartment and are therefore models for biological membranes. Initial work examined pre-design considerations in order to produce an optimal formulation that would closely mimic the action of the cells ordinarily placed on the assay. Recognition of the structural differences between liposomes and cells led to the formulation of liposomes with increased density. This was achieved by using a synthesised cholesterol analogue. By incorporating this cholesterol analogue into liposomes, increased sedimentation rates were observed within the first few hours. The optimal liposome formulation from these studies was composed of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC), cholesterol (Chol) and brominated cholesterol (Brchol) at a 16:4:12 µmol ratio, based on a significantly higher (p<0.01) sedimentation (as determined by a percentage transmission of 59 ± 5.9% compared to the control formulation at 29 ± 12% after four hours). By considering a range of liposome formulations, 'proof of principle' for using liposomes as ELISPOT reference standards was shown: recombinant IFNγ cytokine was successfully entrapped within vesicles of different lipid compositions, which were able to promote spot formation within the ELISPOT assay. Using optimised liposome formulations composed of phosphatidylcholine with or without cholesterol (16 µmol total lipid), further development was undertaken to produce an optimised, scalable protocol for the production of liposomes as reference standards. A linear increase in spot number through the manipulation of cytokine concentration and/or lipid concentration was not possible, potentially due to saturation occurring within the base of the wells. Investigations into storage of the formulations demonstrated the feasibility of freezing and lyophilisation with disaccharide cryoprotectants, but also highlighted the need for further protocol optimisation to achieve a robust reference standard upon storage. Finally, the transfer of small-scale production to a medium lab-scale batch (40 mL) demonstrated that this was feasible within the laboratory using the optimised protocol.
Abstract:
This paper follows on from that presented at the last BEST conference in Edinburgh (Higson & Hamilton-Jones, 2004). At that stage, the authors outlined their initial research work with students studying on the year-long International Foundation programmes at three local FE Colleges allied to Aston University. The research (funded by the University's Teaching Quality Enhancement Fund (TQEF)) involved questionnaires and interviews with staff and students (the latter all from overseas). It aimed to identify ways to improve the learning experience of students on the International Foundation programmes, to aid their smooth transition to full degree programmes in Business and Management, and to improve the progression rates of such students while studying at Aston. The initial research findings were used to design a module for those students progressing to degree programmes in Aston Business School. This paper discusses how the module was designed, its content and the assessment methods used to help determine whether students are achieving the learning outcomes. The basic principle was to identify areas of study where the International Foundation Programme students needed help in order to improve their learning styles and to assist them with the requirements of other modules that they would be studying during their time at Aston. Particular emphasis was put on the need to develop active learners who were not disadvantaged by their lack of awareness of UK culture and society and who were as comfortable performing written work under examination conditions, or presenting orally, as their UK counterparts. An additional aim was to prepare these students for the placement year, which was a compulsory part of their degree. The module, therefore, comprises a range of inputs from a number of staff, a company visit, weekly reflective learning leading to Personal Development Plan (PDP) work, formal examinations, presentations, group work and individual case studies. This paper also reports on the initial reactions of the students and tutors to the new learning experience, with 30 participants currently undertaking the module. Provisional findings suggest that the International Foundation programme has prepared the students well for degree-level work and that, as a group of international students, they are much more analytical and, after studying the module, more interactive than their counterparts who have come directly onto Aston degrees. It has also shown them still to be quite passive learners, comfortable with facts and lecture-style learning environments, but less comfortable when asked to use their own initiative. Continuing progress needs to be made in terms of encouraging them to develop a reflective approach to learning, with the students taking some time to feel comfortable with an analytical approach to learning. In addition, an account of the students' reactions to having to work through a formal PDP, and the results of their first assessments, will be provided. At Aston, this work is being used as a pilot to recognise good practice with regard to work with further groups of international students. It is hoped that this will have widespread application across the sector.
Abstract:
Introduction: For a significant period of time (the late 1950s to the 1980s), a lack of capital freedom was a major obstacle to the progress of the internal market project. The free movements of goods, persons and services were achieved, and developed, primarily through the case law of the Court of Justice of the European Union (CJEU). On the other hand, the Court played a (self-imposed) limited role in the development of the free movement of capital. It was through a progressive series of legislation that this freedom was finally achieved. John Usher has noted that the consequence of this is that 'free movement of capital thus became the only Treaty "freedom" to be achieved in the manner envisaged in the Treaty'. For this reason, the relationship of the Court and the legislature in this area is of particular importance in the broader context of the internal market. The rest of this chapter is split into four sections and will attempt to describe (and account for) the differing relationships between the legislature and the judiciary during the different stages of capital liberalisation. Section 2 will deal with the situation under the original Treaty of Rome. Section 3 will examine a single legislative intervention: Directive 88/361. It was this intervention that contained the obligation for Member States to fully liberalise capital movements; it is therefore the most important contribution to the completion of the internal market in the capital sphere. An examination will be made of whether the interpretation of the Directive demonstrates a changed (or changing) attitude of the Court towards the EU legislature. Section 4 will examine the changes brought about by the Treaty on European Union in 1993. It was at Maastricht that the Member States finally introduced into the Treaty framework an absolute obligation to liberalise capital movements. Finally, Section 5 will consider the Treaty of Lisbon and the possibility of future interventions by the legislature. By looking at the patterns that run through the different sections, this chapter will attempt to engage with the question of whether the approaches were products of their historical context, or whether they can be applied to other areas within the capital movement sphere.
Abstract:
Background: Vaccine development in the post-genomic era often begins with the in silico screening of genome information, with the most probable protective antigens being predicted rather than requiring causative microorganisms to be grown. Despite the obvious advantages of this approach, such as speed and cost efficiency, its success remains dependent on the accuracy of antigen prediction. Most approaches use sequence alignment to identify antigens. This is problematic for several reasons. Some proteins lack obvious sequence similarity, although they may share similar structures and biological properties. The antigenicity of a sequence may be encoded in a subtle and recondite manner not amenable to direct identification by sequence alignment. And the discovery of truly novel antigens will be frustrated by their lack of similarity to antigens of known provenance. To overcome the limitations of alignment-dependent methods, we propose a new alignment-free approach for antigen prediction, which is based on auto cross covariance (ACC) transformation of protein sequences into uniform vectors of principal amino acid properties. Results: Bacterial, viral and tumour protein datasets were used to derive models for the prediction of whole-protein antigenicity. Each set consisted of 100 known antigens and 100 non-antigens. The derived models were tested by internal leave-one-out cross-validation and by external validation using test sets. An additional five training sets for each class of antigens were used to test the stability of the discrimination between antigens and non-antigens. The models performed well in both validations, showing prediction accuracies of 70% to 89%. The models were implemented in a server, which we call VaxiJen. Conclusion: VaxiJen is the first server for alignment-independent prediction of protective antigens. It was developed to allow antigen classification based solely on the physicochemical properties of proteins, without recourse to sequence alignment. The server can be used on its own or in combination with alignment-based prediction methods.
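The ACC transformation at the heart of this approach converts a variable-length protein sequence into a fixed-length vector, which is what allows alignment-free comparison. The sketch below assumes three z-scale amino-acid descriptors and a maximum lag of three; VaxiJen's actual descriptor set, lag and model coefficients may differ.

```python
import numpy as np

# Three z-scale descriptors per amino acid (after Hellberg et al.);
# values are shown for a few residues only, for illustration. A real
# application needs all 20 amino acids.
Z_SCALES = {
    "A": (0.07, -1.73, 0.09),
    "G": (2.23, -5.36, 0.30),
    "K": (2.84, 1.41, -3.14),
    "L": (-4.19, -1.03, -0.98),
    "S": (1.96, -1.63, 0.57),
}

def acc_transform(sequence, max_lag=3):
    """Auto cross covariance (ACC) transform: turns a variable-length
    protein sequence into a fixed-length vector of lagged covariances
    between descriptor dimensions."""
    E = np.array([Z_SCALES[aa] for aa in sequence])   # (n residues, 3)
    E = E - E.mean(axis=0)                            # centre each property
    n, p = E.shape
    terms = []
    for lag in range(1, max_lag + 1):
        # Covariance of property j at position i with property k at i+lag.
        cov = E[:n - lag].T @ E[lag:] / (n - lag)     # (p, p)
        terms.append(cov.ravel())
    return np.concatenate(terms)   # length p*p*max_lag for any sequence

vec = acc_transform("GALKSAGLKS")
print(vec.shape)  # (27,) here: 3 properties x 3 properties x 3 lags
```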
Abstract:
Auditory Training (AT) describes a regimen of varied listening exercises designed to improve an individual's ability to perceive speech. The theory of AT is based on brain plasticity (the capacity of neurones in the central auditory system (CAS) to alter their structure and function) in response to auditory stimulation. The practice of repeatedly listening to the speech sounds included in AT exercises is believed to drive the development of more efficient neuronal pathways, thereby improving auditory processing and speech discrimination. This critical review aims to assess whether auditory training can improve speech discrimination in adults with mild to moderate sensorineural hearing loss (SNHL). The majority of patients attending Audiology services are adults with presbyacusis, and it is therefore important to evaluate evidence of any treatment effect of AT in aural rehabilitation. Ideally this review would seek to appraise evidence of the neurophysiological effects of AT so as to verify whether it does induce change in the CAS. However, due to the absence of such studies on this particular patient group, the outcome measure of speech discrimination, as a behavioural indicator of treatment effect, is used instead. A review of the available research was used to inform an argument for or against using AT in rehabilitative clinical practice. Six studies were identified; although the preliminary evidence indicates an improvement gained from a range of AT paradigms, the treatment effect sizes were modest and there remains a lack of large-sample randomised controlled trials (RCTs). Future investigation into the efficacy of AT needs to employ neurophysiological studies using auditory evoked potentials in hearing-impaired adults in order to explore the effects of AT on the CAS.