934 results for Scientific criticism
Abstract:
Erskine, Toni, 'Qualifying Cosmopolitanism? Solidarity, Criticism, and Michael Walzer's "View from the Cave"', International Politics (2007) 44(1), pp. 125-149. RAE2008
Abstract:
This thesis investigates the extent and range of the ocular vocabulary and themes employed by the playwright Thomas Middleton in the context of early modern scientific, medical, and moral-philosophical writing on vision. More specifically, this thesis concerns Middleton’s revelation of the substance or essence of outward forms through mimesis. This paradoxical stance implies Middleton’s use of an illusory (theatrical) art form to explore hidden truths. This can be related to the early modern belief in the imagination (or fantasy) as chief mediator between the corporeal and spiritual worlds, as well as to a reformed belief in the power of signs to indicate divine truth. This thesis identifies striking parallels between Middleton’s policy of social diagnosis and cure and an increased preoccupation with knowledge of interior man, which culminates in Robert Burton’s Anatomy of Melancholy of 1621. All of these texts seek a cure for diseased internal sense faculties (such as fantasy and will) which cause the raging passions to destroy the individual. The purpose of this thesis is to demonstrate how Middleton takes a similar ‘mental-medicinal’ approach which investigates the idols created by the imagination before ‘purging’ the same and restoring order (Corneanu and Vermeir 184). The idea of infection incurred through eyes fixed on vice (or error) has moral, religious, and political implications, and the discovery of corruption involves stripping away the illusions of false appearances to reveal the truth within, whereby disease can be cured and order restored. Finally, Middleton’s use of theatrical fantasy to detect the idols of the diseased imagination can be read as a Paracelsian, rather than Galenic, form of medicine whereby like is ‘joined with their like’ (Bostocke C7r) to restore health.
Abstract:
This PhD thesis investigates the potential use of science communication models to engage a broader swathe of actors in decision making in relation to scientific and technological innovation, in order to address possible democratic deficits in science and technology policy-making. A four-pronged research approach has been employed to examine different representations of the public(s) and different modes of engagement. The first case study investigates whether patient groups could represent an alternative, needs-driven approach to biomedical and health sciences R & D. This is followed by an enquiry into the potential for Science Shops to represent a bottom-up approach to promoting research and development of local relevance. The barriers and opportunities for the involvement of scientific researchers in science communication are next investigated via a national survey, comparable to a similar survey conducted in the UK. The final case study investigates to what extent opposition or support regarding nanotechnology (as an emerging technology) is reflected amongst the YouTube user community, and the findings are considered in the context of how support for or opposition to new or emerging technologies can be addressed using conflict-resolution-based approaches to manage potential conflict trajectories. The research indicates that the majority of communication exercises of relevance to science policy and planning take the form of a one-way flow of information with little or no facility for public feedback. This thesis proposes that a more bottom-up approach to research and technology would help broaden acceptability of and accountability for decisions made relating to new or existing technological trajectories. This approach could be better integrated with, and complementary to, the activities of government, institutions (e.g. universities), and research funding agencies, and could help ensure that public needs and issues are addressed more directly by the research community.
Such approaches could also facilitate the empowerment of societal stakeholders regarding scientific literacy and agenda-setting. One-way information relays could be adapted to facilitate feedback from representative groups, e.g. non-governmental organisations or civil society organisations (such as patient groups), in order to enhance the functioning and socio-economic relevance of knowledge-based societies to the betterment of human livelihoods.
Abstract:
Many textual scholars will be aware that the title of the present thesis has been composed in a conscious revisionary relation to Tim William Machan’s influential Textual Criticism and Middle English Texts (Tim William Machan, Textual Criticism and Middle English Texts (Charlottesville, 1994)). The primary subjects of Machan’s study are works written in English between the fourteenth and sixteenth centuries, the latter part of the period conventionally labelled Middle English. In contrast, the works with which I am primarily concerned are those written by scholars of Old and Middle Irish in the nineteenth, twentieth and twenty-first centuries. Where Machan aims to articulate the textual and cultural factors that characterise Middle English works as Middle English, the purposes of this thesis are (a) to identify the underlying ideological and epistemological perspectives which have informed much of the way in which medieval Irish documents and texts are rendered into modern editions, and (b) to begin to place the editorial theory and methodology of medieval Irish studies within the broader context of Biblical, medieval and modern textual criticism. Hence, the title is Textual Criticism and Medieval Irish Studies, rather than Textual Criticism and Medieval Irish Texts.
Abstract:
BACKGROUND: The ability to write clearly and effectively is of central importance to the scientific enterprise. Encouraged by the success of simulation environments in other biomedical sciences, we developed WriteSim TCExam, an open-source, Web-based, textual simulation environment for teaching effective writing techniques to novice researchers. We shortlisted and modified an existing open-source application, TCExam, to serve as a textual simulation environment. After testing usability internally in our team, we conducted formal field usability studies with novice researchers. These were followed by formal surveys with researchers fitting the roles of administrators and users (novice researchers). RESULTS: The development process was guided by feedback from usability tests within our research team. Online surveys and formal studies, involving members of the Research on Research group and selected novice researchers, show that the application is user-friendly. Additionally, it has been used to train 25 novice researchers in scientific writing to date and has generated encouraging results. CONCLUSION: WriteSim TCExam is the first Web-based, open-source textual simulation environment designed to complement traditional scientific writing instruction. While initial reviews by students and educators have been positive, a formal study is needed to measure its benefits in comparison to standard instructional methods.
Abstract:
BACKGROUND: Writing plays a central role in the communication of scientific ideas and is therefore a key aspect of researcher education, ultimately determining the success and long-term sustainability of researchers' careers. Despite the growing popularity of e-learning, we are not aware of any existing study comparing on-line vs. traditional classroom-based methods for teaching scientific writing. METHODS: Forty-eight participants from medical, nursing, and physiotherapy backgrounds in the US and Brazil were randomly assigned to two groups (n = 24 per group): an on-line writing workshop group (on-line group), in which participants used virtual communication, Google Docs, and standard writing templates, and a standard writing guidance training group (standard group), in which participants received standard instruction without the aid of virtual communication and writing templates. Two outcomes were evaluated: manuscript quality, assessed as the primary outcome measure using scores on a six-subgroup quality scale (SSQS), and satisfaction, rated on a Likert scale. To control for observer variability, inter-observer reliability was assessed using Fleiss's kappa. A post-hoc analysis comparing rates of communication between mentors and participants was performed. Nonparametric tests were used to assess intervention efficacy. RESULTS: Excellent inter-observer reliability among the three reviewers was found, with an intraclass correlation coefficient (ICC) agreement of 0.931882 and ICC consistency of 0.932485. The on-line group had better overall manuscript quality (p = 0.0017; mean SSQS score 75.3 +/- 14.21, ranging from 37 to 94) than the standard group (47.27 +/- 14.64, ranging from 20 to 72). Participant satisfaction was higher in the on-line group (4.3 +/- 0.73) than in the standard group (3.09 +/- 1.11) (p = 0.001). The standard group also had fewer communication events than the on-line group (0.91 +/- 0.81 vs. 2.05 +/- 1.23; p = 0.0219).
CONCLUSION: Our protocol for on-line scientific writing instruction outperformed standard face-to-face instruction in terms of writing quality and student satisfaction. Future studies should evaluate the protocol's efficacy in larger longitudinal cohorts involving participants with different language backgrounds.
Abstract:
Presentation at the Controlling Dangerous Pathogens Project Regional Workshop on Dual-Use Research, Teresopolis, Brazil
Abstract:
"Facts and Fictions: Feminist Literary Criticism and Cultural Critique, 1968-2012" is a critical history of the unfolding of feminist literary study in the US academy. It contributes to current scholarly efforts to revisit the 1970s by reconsidering often-repeated narratives about the critical naivety of feminist literary criticism in its initial articulation. As the story now goes, many of the most prominent feminist thinkers of the period engaged in unsophisticated literary analysis by conflating lived social reality with textual representation when they read works of literature as documentary evidence of real life. As a result, the work of these "bad critics," particularly Kate Millett and Andrea Dworkin, has not been fully accounted for in literary critical terms.
This dissertation returns to Dworkin and Millett's work to argue for a different history of feminist literary criticism. Rather than dismiss their work for its conflation of fact and fiction, I pay attention to the complexity at the heart of it, yielding a new perspective on the history and persistence of the struggle to use literary texts for feminist political ends. Dworkin and Millett established the centrality of reality and representation to the feminist canon debates of "the long 1970s," the sex wars of the 1980s, and the more recent feminist turn to memoir. I read these productive periods in feminist literary criticism from 1968 to 2012 through their varied commitment to literary works.
Chapter One begins with Millett, who de-aestheticized male-authored texts to treat patriarchal literature in relation to culture and ideology. Her mode of literary interpretation was so far afield from the established methods of New Criticism that she was not understood as a literary critic. She was repudiated in the feminist literary criticism that followed her and sought sympathetic methods for reading women's writing. In that decade, the subject of Chapter Two, feminist literary critics began to judge texts on the basis of their ability to accurately depict the reality of women's experiences.
Their vision of the relationship between life and fiction shaped arguments about pornography during the sex wars of the 1980s, the subject of Chapter Three. In this context, Dworkin was feminism's "bad critic." I focus on the literary critical elements of Dworkin's theories of pornographic representation and align her with Millett as a miscategorized literary critic. In the decades following the sex wars, many of the key feminist literary critics of the founding generation (including Dworkin, Jane Gallop, Carolyn Heilbrun, and Millett) wrote memoirs that recounted, largely in experiential terms, the history this dissertation examines. Chapter Four considers the story these memoirists told about the rise and fall of feminist literary criticism. I close with an epilogue on the place of literature in a feminist critical enterprise that has shifted toward privileging theory.
Abstract:
p.131-140
Abstract:
The shared-memory programming model can be an effective way to achieve parallelism on shared-memory parallel computers. Historically, however, the lack of a standard for directive-based programming and limited scalability have affected its take-up. Recent advances in hardware and software technologies have resulted in improvements both to the performance of parallel programs using compiler directives and to portability, with the introduction of OpenMP. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorize the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. This demonstrates the great potential of using the toolkit to quickly parallelise serial programs, as well as the good performance achievable on up to 300 processors for hybrid message-passing/directive parallelisations.
Abstract:
This paper describes the use of a blackboard architecture for building a hybrid case-based reasoning (CBR) system. The Smartfire fire field modelling package has been built using this architecture and includes a CBR component. The architecture allows qualitative spatial reasoning knowledge from domain experts to be integrated into the system. The system can be used for the automatic set-up of fire field models, enabling fire safety practitioners who are not expert in modelling techniques to use a fire modelling tool. The paper discusses the integrating powers of the architecture, which is based on a common knowledge representation comprising a metric diagram and place vocabulary, with mechanisms for adaptation and conflict resolution built on the blackboard.
Abstract:
The parallelization of real-world compute-intensive Fortran application codes is generally not a trivial task. If the time to complete the parallelization is to be significantly reduced, then an environment is needed that will assist the programmer in the various tasks of code parallelization. In this paper the authors present a code parallelization environment in which a number of tools addressing the main tasks, such as code parallelization, debugging, and optimization, are available. The ParaWise and CAPO parallelization tools are discussed, which enable the near-automatic parallelization of real-world scientific application codes for shared- and distributed-memory parallel systems. As user involvement in the parallelization process can introduce errors, a relative debugging tool (P2d2) is also available and can be used to perform nearly automatic relative debugging of a program that has been parallelized using the tools. A high-quality interprocedural dependence analysis and user-tool interaction are also highlighted, as both are vital to the generation of efficient parallel code and to the optimization of the backtracking and speculation process used in relative debugging. Results for benchmark and real-world application codes that have been parallelized are presented and show the benefits of using the environment.