837 results for Errors and blunders, Literary.
Abstract:
This thesis contributes to the understanding of the processes involved in the formation and transformation of identities. It achieves this goal by establishing the critical importance of ‘background’ and ‘liminality’ in the shaping of identity. Drawing mainly on cultural anthropology and philosophical hermeneutics, a theoretical framework is constructed from which transformative experiences can be analysed. The particular experience at the heart of this study is the phenomenon of conversion and the dynamics involved in the construction of that process. Establishing the axial age as the horizon from which the process of conversion emerged will be the main theme of the first part of the study. Identifying the ‘birth’ of conversion allows a deeper understanding of the historical dynamics that make up the process. From these fundamental dynamics a theoretical framework is constructed in order to analyse the conversion process. Applying this theoretical framework to a number of case studies will be the central focus of this study. The transformative experiences of Saint Augustine, the fourteenth-century nun Margaret Ebner, the communist revolutionary Karl Marx and the literary figure Arthur Koestler will provide the material to which the theoretical framework can be applied. A synthesis of the Judaic religious and Greek philosophical traditions emerges as the main finding in the shaping of Augustine’s conversion experience. The dissolution of political order, coupled with the institutionalisation of the conversion process, will illuminate the mystical experiences of Margaret Ebner at a time when empathetic conversion reached its fullest expression. The final case studies examine two modern ‘conversions’ that seem to have an ideological rather than a religious basis.
On closer examination it will be found that the German tradition of Biblical Criticism played a most influential role in the ‘conversion’ of Marx, and that mythology is the best medium through which to understand the experiences of Koestler. The main ideas emerging from this study highlight the fluidity of identity and the important role of ‘background’ in its transformation. The theoretical framework, as constructed for this study, is found to be a useful methodological tool that can offer insights into experiences, such as conversion, that would otherwise remain hidden from our enquiries.
Abstract:
Background: Hospital clinicians are increasingly expected to practice evidence-based medicine (EBM) in order to minimize medical errors and ensure quality patient care, but they experience obstacles to information-seeking. The introduction of a Clinical Informationist (CI) is explored as a possible solution. Aims: This paper investigates the self-perceived information needs, behaviour and skill levels of clinicians in two Irish public hospitals. It also explores clinicians’ perceptions of and attitudes to the introduction of a CI into their clinical teams. Methods: A questionnaire survey approach was utilised for this study, with 22 clinicians in two hospitals. Data analysis was conducted using descriptive statistics. Results: Analysis showed that clinicians experience diverse information needs for patient care, and that barriers such as time constraints and insufficient access to resources hinder their information-seeking. Findings also showed that clinicians struggle to fit information-seeking into their working day, regularly seeking to answer patient-related queries outside of working hours. Attitudes towards the concept of a CI were predominantly positive. Conclusion: This paper highlights the factors that characterise and limit hospital clinicians’ information-seeking, and suggests the CI as a potentially useful addition to the clinical team, to help them to resolve their information needs for patient care.
Abstract:
There have been few genuine success stories about industrial use of formal methods. Perhaps the best known and most celebrated is the use of Z by IBM (in collaboration with Oxford University's Programming Research Group) during the development of CICS/ESA (version 3.1). This work was rewarded with the prestigious Queen's Award for Technological Achievement in 1992 and is especially notable for two reasons: 1) because it is a commercial, rather than safety- or security-critical, system and 2) because the claims made about the effectiveness of Z are quantitative as well as qualitative. The most widely publicized claims are: less than half the normal number of customer-reported errors and a 9% savings in the total development costs of the release. This paper provides an independent assessment of the effectiveness of using Z on CICS based on the set of public domain documents. Using this evidence, we believe that the case study was important and valuable, but that the quantitative claims have not been substantiated. The intellectual arguments and rationale for formal methods are attractive, but their widespread commercial use is ultimately dependent upon more convincing quantitative demonstrations of effectiveness. Despite the pioneering efforts of IBM and PRG, there is still a need for rigorous, measurement-based case studies to assess when and how the methods are most effective. We describe how future similar case studies could be improved so that the results are more rigorous and conclusive.
Abstract:
The paper first considers the role of Jungian ideas in relation to academic disciplines and to literary studies in particular. Jung is a significant resource in negotiating developments in literary theory because of his characteristic treatment of the ‘other’. The paper then looks at The Lion, the Witch and the Wardrobe (1950) by C.S. Lewis whose own construction of archetypes is very close to Jung’s. By drawing upon new post-Jungian work from Jerome Bernstein’s Living in the Borderland (2005), the novel is revealed to be intimately concerned with narratives of trauma and of origin. Indeed, a Jungian and post-Jungian approach is able to situate the text both within nature and in the historical traumas of war as well as the personal traumas of subjectivity. Where Bernstein connects his work to the postcolonial ethos of the modern Navajo shaman, this new weaving of literary and cultural theory points to the residue of shamanism within the arts of the West. [From the Publisher]
Abstract:
High-resolution UCLES/AAT spectra are presented for nine B-type supergiants in the SMC, chosen on the basis that they may show varying amounts of nucleosynthetically processed material mixed to their surface. These spectra have been analysed using a new grid of approximately 12 000 non-LTE line-blanketed tlusty model atmospheres to estimate atmospheric parameters and chemical composition. The abundance estimates for O, Mg and Si are in excellent agreement with those deduced from other studies, whilst the low estimate for C may reflect the use of the C II doublet at 4267 Å. The N estimates are approximately an order of magnitude greater than those found in unevolved B-type stars or H II regions but are consistent with the other estimates in AB-type supergiants. These results have been combined with results from a unified model atmosphere analysis of UVES/VLT spectra of B-type supergiants (Trundle et al. 2004, A&A, 417, 217) to discuss the evolutionary status of these objects. For two stars that are in common with those discussed by Trundle et al., we have undertaken a careful comparison in order to try to understand the relative importance of the different uncertainties present in such analyses, including observational errors and the use of static or unified models. We find that even for these relatively luminous supergiants, tlusty models yield atmospheric parameters and chemical compositions similar to those deduced from the unified code fastwind.
Abstract:
The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA–caffeine–lactose tablets with a Raman microscope (λex = 785 nm, 3 μm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca 40 μm in diameter. Spectra collected with a microscope from eight points on a 200 μm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, μr, was 1.19 with an unacceptably high standard deviation, σr, of 1.20. In contrast, with a conventional macro-Raman system (150 μm spot diameter), combined eight-grid-point data gave μr = 1.47 with σr = 0.16. A simple statistical model which could be used to predict σr under the various conditions used was developed. The model showed that the decrease in σr on moving to a 150 μm spot was too large to be due entirely to the increased spot diameter but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1–2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level.
The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA–sorbitol (0–30% by mass MDEA). A simple univariate calibration model of averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
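The averaging strategy in this abstract can be illustrated with a minimal sketch: if grid-point spectra were statistically independent, combining n of them would shrink the standard deviation of the intensity ratio by a factor of √n. This is a deliberately simplified toy model (the paper's actual model also accounts for the larger sampling volume of the macro system), using the single-point micro-Raman spread reported above.

```python
import math

def predicted_sigma(sigma_point: float, n: int) -> float:
    """Toy sampling model: averaging spectra from n independent grid
    points reduces the ratio's standard deviation by sqrt(n).
    Illustrative only; it ignores the sampling-volume effects
    discussed in the abstract."""
    return sigma_point / math.sqrt(n)

# Micro-Raman single-point spread reported above: sigma_r ~ 1.20
print(round(predicted_sigma(1.20, 8), 2))   # 0.42 (8-point grid)
print(round(predicted_sigma(1.20, 64), 2))  # 0.15 (64-point grid)
```

The gap between this idealised prediction and the measured σr values is exactly what motivated the authors' more detailed model.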
Abstract:
There is a growing literature examining the impact of research on informing policy, and of research and policy on practice. Research and policy do not have the same types of impact on practice but can be evaluated using similar approaches. Sometimes the literature provides a platform for methodological debate but mostly it is concerned with how research can link to improvements in the process and outcomes of education, how it can promote innovative policies and practice, and how it may be successfully disseminated. Whether research-informed or research-based, policy and its implementation is often assessed on such 'hard' indicators of impact as changes in the number of students gaining five or more A to C grades in national examinations or a percentage fall in the number of exclusions in inner city schools. Such measures are necessarily crude, with large samples smoothing out errors and disguising instances of significant success or failure. Even when 'measurable' in such a fashion, however, the impact of any educational change or intervention may require a period of years to become observable. This paper considers circumstances in which short-term change may be implausible or difficult to observe. It explores how impact is currently theorized and researched and promotes the concept of 'soft' indicators of impact in circumstances in which the pursuit of conventional quantitative and qualitative evidence is rendered impractical within a reasonable cost and timeframe. Such indicators are characterized by their avowedly subjective, anecdotal and impressionistic provenance and have particular importance in the context of complex community education issues where the assessment of any impact often faces considerable problems of access. 
These indicators include the testimonies of those on whom the research intervention or policy focuses (for example, students, adult learners), the formative effects that are often reported (for example, by head teachers, community leaders) and media coverage. The collation and convergence of a wide variety of soft indicators (Where there is smoke …) is argued to offer a credible means of identifying subtle processes that are often neglected as evidence of potential and actual impact (… there is fire).
Abstract:
From the perspective of structure synthesis, certain special geometric constraints, such as joint axes intersecting at one point or being perpendicular to each other, are necessary to realize the end-effector motion of kinematically decoupled parallel manipulators (PMs) along individual motion axes. These requirements are difficult to achieve in the actual system due to assembly errors and manufacturing tolerances. Errors that violate the geometric constraint requirements are termed “constraint errors”. Constraint errors are usually more troublesome than other manipulator errors because the decoupled motion characteristics of the manipulator may no longer exist and the decoupled kinematic models will be rendered useless. Therefore, identification and prevention of these constraint errors at the initial design and manufacturing stages are of great significance. In this article, three basic types of constraint errors are identified, and an approach to evaluate the effects of constraint errors on the decoupling characteristics of PMs is proposed. The approach is illustrated by a 6-DOF PM with decoupled translation and rotation. The results show that the proposed evaluation method is effective in guiding design and assembly.
Abstract:
As critics have noted, Antillean literature has developed in tandem with a strong (self-) critical and theoretical body of work. The various attempts to theorize Antillean identity (négritude, antillanité, créolité) have been controversial and divisive, and the literary scene has been characterized as explosive, incestuous and self-referential. Yet writers aligned with, or opposed to, a given theory often have superior visibility. Meanwhile writers who claim to operate outside the boundaries of theory, such as Maryse Condé, are often canny theoretical operators who, from prestigious academic or cultural positions, manipulate readers’ responses and their own self-image through criticism. While recent polemics have helped to raise the critical stock of the islands generally, they have particularly enhanced the cultural capital of Chamoiseau and Condé, whose literary antagonism is in fact mutually sustaining. Both writers, through a strong awareness of (and contribution to) the critical field in which their work is read, position themselves as canonical authors.
Abstract:
In three studies we looked at two typical misconceptions of probability: the representativeness heuristic and the equiprobability bias. The literature on statistics education predicts that some typical errors and biases (e.g., the equiprobability bias) increase with education, whereas others decrease. This contrasts with the prediction of reasoning theorists, who propose that education reduces misconceptions in general. They also predict that students with higher cognitive ability and higher need for cognition are less susceptible to biases. In Experiments 1 and 2 we found that the equiprobability bias increased with statistics education, and it was negatively correlated with students’ cognitive abilities. The representativeness heuristic was mostly unaffected by education, and it was also unrelated to cognitive abilities. In Experiment 3 we demonstrated through an instruction manipulation (asking participants to think logically vs. rely on their intuitions) that these differences arose because the two biases originate in different cognitive processes.
Abstract:
Romanticism and Blackwood's Magazine is inspired by the ongoing critical fascination with Blackwood's Edinburgh Magazine, and the burgeoning recognition of its centrality to the Romantic age. Though the magazine itself was published continuously for well over a century and a half, this volume concentrates specifically on those years when William Blackwood was at the helm, beginning with his founding of the magazine in 1817 and closing with his death in 1834. These were the years when, as Samuel Taylor Coleridge put it in 1832, Blackwood's reigned as 'an unprecedented Phenomenon in the world of letters.' The magazine placed itself at the centre of the emerging mass media, commented decisively on all the major political and cultural issues that shaped the Romantic movement, and published some of the leading writers of the day, including Coleridge, Thomas De Quincey, John Galt, Felicia Hemans, James Hogg, Walter Scott, and Mary Shelley.
'This much-needed volume reminds us not only why Blackwood's was the most influential periodical publication of the time, but also how its writers, writings, and critical agendas continue to shape so many of the scholarly concerns of Romantic studies in the twenty-first century.' - Charles Mahoney, Associate Professor, University of Connecticut, USA
List of Illustrations
Acknowledgements
Abbreviations
Notes on Contributors
'A character so various, and yet so indisputably its own': A Passage to Blackwood's Edinburgh Magazine; R.Morrison & D.S.Roberts
PART I: BLACKWOOD'S AND THE PERIODICAL PRESS
Beginning Blackwood's: The Right Mix of Dulce and Utile; P.Flynn
John Gibson Lockhart and Blackwood's: Shaping the Romantic Periodical Press; T.Richardson
From Gluttony to Justified Sinning: Confessional Writing in Blackwood's and the London Magazine; D.Higgins
Camaraderie and Conflict: De Quincey and Wilson on Enemy Lines; R.Morrison
Selling Blackwood's Magazine, 1817-1834; D.Finkelstein
PART II: BLACKWOOD'S CULTURE AND CRITICISM
Blackwood's 'Personalities'; T.Mole
Communal Reception, Mary Shelley, and the 'Blackwood's School' of Criticism; N.Mason
Blackwoodian Allusion and the Culture of Miscellaneity; D.Stewart
Blackwood's Edinburgh Magazine in the Scientific Culture of Early Nineteenth-Century Edinburgh; W.Christie
The Art and Science of Politics in Blackwood's Edinburgh Magazine, c. 1817-1841; D.Kelly
Prosing Poetry: Blackwood's and Generic Transposition, 1820-1840; J.Camlot
PART III: BLACKWOOD'S FICTIONS
Blackwood's and the Boundaries of the Short Story; T.Killick
The Edinburgh of Blackwood's Edinburgh Magazine and James Hogg's Fiction; G.Hughes
The Taste for Violence in Blackwood's Magazine; M.Schoenfield
PART IV: BLACKWOOD'S AT HOME
John Wilson and Regency Authorship; R.Cronin
John Wilson and Sport; J.Strachan
William Maginn and the Blackwood's 'Preface' of 1826; D.E.Latané, Jr.
All Work and All Play: Felicia Hemans's Edinburgh Noctes; N.Sweet
PART V: BLACKWOOD'S ABROAD
Imagining India in Early Blackwood's; D.S.Roberts
Tales of the Colonies: Blackwood's, Provincialism, and British Interests Abroad; A.Jarrells
Selected Bibliography
Index
ROBERT MORRISON is Queen's National Scholar at Queen's University, Kingston, Ontario, Canada. His book, The English Opium-Eater: A Biography of Thomas De Quincey was a finalist for the James Tait Black Prize. He has edited writings by Jane Austen, Leigh Hunt, Thomas De Quincey, and John Polidori.
DANIEL SANJIV ROBERTS is Reader in English at Queen's University Belfast, UK. His publications include a monograph, Revisionary Gleam: De Quincey, Coleridge, and the High Romantic Argument (2000), and major critical editions of Thomas De Quincey's Autobiographic Sketches and Robert Southey's The Curse of Kehama; the latter was cited as a Distinguished Scholarly Edition by the MLA. He is currently working on an edition of Charles Johnstone's novel The History of Arsaces, Prince of Betlis for the Early Irish Fiction series.
Abstract:
Objectives: Study objectives were to investigate the prevalence and causes of prescribing errors amongst foundation doctors (i.e. junior doctors in their first (F1) or second (F2) year of post-graduate training), describe their knowledge and experience of prescribing errors, and explore their self-efficacy (i.e. confidence) in prescribing.
Method: A three-part mixed-methods design was used, comprising: prospective observational study; semi-structured interviews and cross-sectional survey. All doctors prescribing in eight purposively selected hospitals in Scotland participated. All foundation doctors throughout Scotland participated in the survey. The number of prescribing errors per patient, doctor, ward and hospital, perceived causes of errors and a measure of doctors’ self-efficacy were established.
Results: 4710 patient charts and 44,726 prescribed medicines were reviewed. There were 3364 errors, affecting 1700 (36.1%) charts (overall error rate: 7.5%; F1: 7.4%; F2: 8.6%; consultants: 6.3%). Higher error rates were associated with teaching hospitals (p < 0.001), surgical (p < 0.001) or mixed wards (p = 0.008) rather than medical wards, higher patient turnover wards (p < 0.001), a greater number of prescribed medicines (p < 0.001) and the months December and June (p < 0.001). One hundred errors were discussed in 40 interviews. Error causation was multi-factorial; work environment and team factors were particularly noted. Of 548 completed questionnaires (national response rate of 35.4%), 508 (92.7% of respondents) reported errors, most of which (328; 64.6%) did not reach the patient. Pressure from other staff, workload and interruptions were cited as the main causes of errors. Foundation year 2 doctors reported greater confidence than year 1 doctors in deciding the most appropriate medication regimen.
Conclusions: Prescribing errors are frequent and of complex causation. Foundation doctors made more errors than other doctors, but undertook the majority of prescribing, making them a key target for intervention. Contributing causes included work environment, team, task, individual and patient factors. Further work is needed to develop and assess interventions that address these.
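The headline rates in the Results section follow directly from the reported counts; a quick arithmetic check (Python used purely for illustration):

```python
# Counts reported in the abstract above.
errors, medicines = 3364, 44726
charts_affected, charts = 1700, 4710

overall_rate = 100 * errors / medicines      # errors per prescribed medicine
chart_rate = 100 * charts_affected / charts  # charts containing at least one error

print(f"overall error rate: {overall_rate:.1f}%")  # 7.5%
print(f"charts affected:    {chart_rate:.1f}%")    # 36.1%
```

Both values match the percentages quoted in the abstract, confirming that the "overall error rate" is per prescribed medicine rather than per chart.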
Abstract:
This study aims to evaluate the use of Varian radiotherapy dynamic treatment log (DynaLog) files to verify IMRT plan delivery as part of a routine quality assurance procedure. Delivery accuracy in terms of machine performance was quantified by multileaf collimator (MLC) position errors and fluence delivery accuracy for patients receiving intensity modulated radiation therapy (IMRT) treatment. The relationship between machine performance and plan complexity, quantified by the modulation complexity score (MCS) was also investigated. Actual MLC positions and delivered fraction of monitor units (MU), recorded every 50 ms during IMRT delivery, were extracted from the DynaLog files. The planned MLC positions and fractional MU were taken from the record and verify system MLC control file. Planned and delivered beam data were compared to determine leaf position errors with and without the overshoot effect. Analysis was also performed on planned and actual fluence maps reconstructed from the MLC control file and delivered treatment log files respectively. This analysis was performed for all treatment fractions for 5 prostate, 5 prostate and pelvic node (PPN) and 5 head and neck (H&N) IMRT plans, totalling 82 IMRT fields in ∼5500 DynaLog files. The root mean square (RMS) leaf position errors without the overshoot effect were 0.09, 0.26, 0.19 mm for the prostate, PPN and H&N plans respectively, which increased to 0.30, 0.39 and 0.30 mm when the overshoot effect was considered. Average errors were not affected by the overshoot effect and were 0.05, 0.13 and 0.17 mm for prostate, PPN and H&N plans respectively. The percentage of pixels passing fluence map gamma analysis at 3%/3 mm was 99.94 ± 0.25%, which reduced to 91.62 ± 11.39% at 1%/1 mm criterion. Leaf position errors, but not gamma passing rate, were directly related to plan complexity as determined by the MCS. 
Site-specific confidence intervals for average leaf position errors were set at −0.03 to 0.12 mm for prostate and −0.02 to 0.28 mm for the more complex PPN and H&N plans. For all treatment sites, confidence intervals for RMS errors with the overshoot effect were set at 0 to 0.50 mm, and for the percentage of pixels passing a gamma analysis at 1%/1 mm a confidence interval of 68.83% was also set for all treatment sites. This work demonstrates the successful implementation of treatment log files to validate IMRT deliveries, and shows how dynamic log files can diagnose delivery errors not detectable with phantom-based QC. Machine performance was found to be directly related to plan complexity, but this is not the dominant determinant of delivery accuracy.
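The average and RMS leaf position errors quoted above are straightforward to compute once planned and actual MLC positions have been extracted from the log files. A minimal sketch, with hypothetical position values (DynaLog parsing itself is not shown):

```python
import math

def leaf_position_errors(planned, actual):
    """Per-leaf position errors (actual - planned), summarised as
    mean and root-mean-square, the two statistics reported above.
    Inputs are hypothetical lists of leaf positions in mm."""
    errs = [a - p for p, a in zip(planned, actual)]
    mean_err = sum(errs) / len(errs)
    rms_err = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mean_err, rms_err

mean_e, rms_e = leaf_position_errors([10.0, 12.5, 15.0, 20.0],
                                     [10.1, 12.4, 15.2, 19.9])
print(round(mean_e, 3), round(rms_e, 3))  # 0.025 0.132
```

Note how signed errors partially cancel in the mean but not in the RMS, which is why the abstract reports both: the mean reveals systematic offsets such as the overshoot effect, while the RMS captures overall scatter.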
Abstract:
The motivation for this study was to reduce physics workload relating to patient-specific quality assurance (QA). VMAT plan delivery accuracy was determined from analysis of pre- and on-treatment trajectory log files and phantom-based ionization chamber array measurements. The correlation in this combination of measurements for patient-specific QA was investigated. The relationship between delivery errors and plan complexity was investigated as a potential method to further reduce patient-specific QA workload. Thirty VMAT plans from three treatment sites (prostate only, prostate and pelvic node (PPN), and head and neck (H&N)) were retrospectively analyzed in this work. The 2D fluence delivery reconstructed from pretreatment and on-treatment trajectory log files was compared with the planned fluence using gamma analysis. Pretreatment dose delivery verification was also carried out using gamma analysis of ionization chamber array measurements compared with calculated doses. Pearson correlations were used to explore any relationship between trajectory log file (pretreatment and on-treatment) and ionization chamber array gamma results (pretreatment). Plan complexity was assessed using the MU/arc and the modulation complexity score (MCS), with Pearson correlations used to examine any relationships between complexity metrics and plan delivery accuracy. Trajectory log files were also used to further explore the accuracy of MLC and gantry positions. Pretreatment 1%/1 mm gamma passing rates for trajectory log file analysis were 99.1% (98.7%–99.2%), 99.3% (99.1%–99.5%), and 98.4% (97.3%–98.8%) (median (IQR)) for prostate, PPN, and H&N, respectively, and were significantly correlated to on-treatment trajectory log file gamma results (R = 0.989, p < 0.001). Pretreatment ionization chamber array (2%/2 mm) gamma results were also significantly correlated with on-treatment trajectory log file gamma results (R = 0.623, p < 0.001).
Furthermore, all gamma results displayed a significant correlation with MCS (R > 0.57, p < 0.001), but not with MU/arc. Average MLC position and gantry angle errors were 0.001 ± 0.002 mm and 0.025° ± 0.008° over all treatment sites and were not found to affect delivery accuracy. However, variability in MLC speed was found to be directly related to MLC position accuracy. The accuracy of VMAT plan delivery assessed using pretreatment trajectory log file fluence delivery and ionization chamber array measurements was strongly correlated with on-treatment trajectory log file fluence delivery. The strong correlation between trajectory log file and phantom-based gamma results demonstrates potential to reduce our current patient-specific QA. Additionally, insight into MLC and gantry position accuracy through trajectory log file analysis, together with the strong correlation between gamma analysis results and the MCS, could provide further methodologies to optimize both the VMAT planning and QA processes.
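The Pearson correlations reported in the two abstracts above can be reproduced in a few lines. A minimal sketch, with hypothetical gamma passing rates standing in for the real per-plan data (the function, not the numbers, is the point):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    sequences, as used above to relate pretreatment log-file gamma
    results to on-treatment results."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 1%/1 mm gamma passing rates (%) for five plans
pre = [99.1, 99.3, 98.4, 99.0, 98.7]
on = [99.0, 99.4, 98.5, 99.1, 98.6]
print(round(pearson_r(pre, on), 3))  # 0.955
```

In practice one would also report a p-value (e.g. via scipy.stats.pearsonr) and check that the relationship is not driven by a single outlying plan before using pretreatment results as a surrogate for on-treatment QA.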