980 results for "Practical geopolitical discourse"
Abstract:
This paper contributes to critical policy research by theorising one aspect of policy enactment: the meaning-making work of a cohort of mid-level policy actors. Specifically, we propose that Basil Bernstein's work on the structuring of pedagogic discourse, in particular the concept of recontextualisation, may add to understandings of the policy work of interpretation and translation. Recontextualisation refers to the relational processes of selecting and moving knowledge from one context to another, as well as to the distinctive re-organisation of knowledge as an instructional and regulative or moral discourse. Processes of recontextualisation necessitate an analysis of power and control relations, and therefore add to the Foucauldian theorisations of power that currently dominate the critical policy literature. A process of code elaboration (decoding and recoding) takes place in various recontextualising agencies responsible for the production of professional development materials, teaching guidelines and curriculum resources. We propose that mid-level policy actors are crucial to the work of policy interpretation and translation because they are engaged in elaborating the condensed codes of policy texts to an imagined logic of teachers' practical work. To illustrate our theoretical points, we draw on data collected for an Australian research project on the accounts of mid-level policy actors responsible for the interpretation of child protection and safety policies for staff in Queensland schools.
Abstract:
This paper surveys the practical benefits and drawbacks of several identity-based encryption schemes based on bilinear pairings. After providing some background on identity-based cryptography, we classify the known constructions into a handful of general approaches. We then describe efficient and fully secure IBE and IBKEM instantiations of each approach, with reducibility to practice as the main design parameter. Finally, we catalogue the strengths and weaknesses of each construction according to a few theoretical and many applied comparison criteria.
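To make the pairing-based setting concrete, here is a minimal toy sketch of the four IBE algorithms (Setup, Extract, Encrypt, Decrypt) in the style of the classic Boneh–Franklin BasicIdent construction, one of the approaches such surveys typically classify. Group elements are represented by their discrete logarithms modulo the group order, so the "pairing" collapses to modular multiplication; this keeps the code self-contained and runnable but provides no security whatsoever. All names and parameters here are our own illustration, not taken from the paper.

```python
import hashlib
import secrets

# Toy Boneh-Franklin "BasicIdent" sketch. Group elements are represented by
# their discrete logarithms modulo a prime group order q, so the bilinear
# pairing e(aP, bP) = g_T^(ab) becomes plain modular multiplication a*b mod q.
# Purely structural and INSECURE: it exists only to show the four algorithms.

q = 2**127 - 1  # a Mersenne prime, used as the toy group order

def H1(identity: str) -> int:
    """Hash an identity string onto a (toy) G1 element Q_ID."""
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % q

def H2(gt_element: int) -> bytes:
    """Hash a (toy) GT element to a 32-byte keystream."""
    return hashlib.sha256(gt_element.to_bytes(16, "big")).digest()

def setup():
    """PKG key generation: master secret s, public key P_pub = s*P."""
    s = secrets.randbelow(q)
    return s, s  # (master secret, P_pub in exponent form)

def extract(master: int, identity: str) -> int:
    """Private key d_ID = s * Q_ID."""
    return (master * H1(identity)) % q

def encrypt(p_pub: int, identity: str, msg: bytes):
    """Ciphertext (U, V): U = r*P, V = msg XOR H2(e(Q_ID, P_pub)^r)."""
    r = secrets.randbelow(q)
    g_id = (H1(identity) * p_pub) % q  # e(Q_ID, P_pub)
    mask = H2((g_id * r) % q)          # g_ID^r
    return r, bytes(m ^ k for m, k in zip(msg, mask))

def decrypt(d_id: int, u: int, v: bytes) -> bytes:
    """Recover msg = V XOR H2(e(d_ID, U))."""
    mask = H2((d_id * u) % q)
    return bytes(c ^ k for c, k in zip(v, mask))

master, p_pub = setup()
d_alice = extract(master, "alice@example.com")
u, v = encrypt(p_pub, "alice@example.com", b"pairing-based IBE, in miniature")
assert decrypt(d_alice, u, v) == b"pairing-based IBE, in miniature"
```

The correctness of decryption rests exactly on bilinearity: e(Q_ID, P_pub)^r = e(s*Q_ID, r*P) = e(d_ID, U), which in the exponent representation is simply h*s*r mod q computed two ways.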
Abstract:
Digital literacy poses a particular challenge to the research-led university. Although these universities are often at the forefront of introducing digital literacy initiatives—such as e-learning platforms, technological infrastructure, and digital repositories—these applications of digital literacy tend to be more instrumental or functional than critical or creative. Certainly, this clash of cultures between the instrumental/functional and the critical/analytical is at the heart of debates over the uses of digital literacy in higher education. However, this simple equation of political forces with instrumentality and the corresponding equation of the university with a tradition of reflective thought that brings criticism to bear on instrumentality elide the fact that this conflict is more deeply rooted within the academy. This essay argues that, in fact, much of the resistance to critical uses of digital literacy comes from within the institution of the university itself. That is, the university is bound up in a scriptural economy that prioritises the printed word and that reinforces its power by way of a normative, political, and spatialised academic discourse. It is this print-based scriptural economy—in which this essay must acknowledge its own complicity—that a critical approach to digital literacy threatens to disrupt or lay bare.
Abstract:
Working primarily within the natural landscape, this practice-led research project explored connections between the artist's visual and perceptual experience of a journey or place while simultaneously emphasizing the capacity for digital media to create a perceptual dissonance. By exploring concepts of time, viewpoint, duration of sequences and the manipulation of traditional constructs of stop-frame animation, the practical work created a cognitive awareness of the elements of the journey through optical sensations. The work allowed an opportunity to reflect on the nature of visual experience and its mediation through images. The project recontextualized the selected mediums of still photography, animation and projection within contemporary display modes of multiple screen installations by analysing relationships between the experienced and the perceived. The resulting works added to current discourse on the interstices between still and moving imagery in a digital world.
Abstract:
The motion response of marine structures in waves can be studied using finite-dimensional, linear time-invariant approximating models. These models, obtained using system identification with data computed by hydrodynamic codes, find application in offshore training simulators, hardware-in-the-loop simulators for positioning-control testing, and initial designs of wave-energy conversion devices. Different proposals have appeared in the literature to address the identification problem in both the time and frequency domains, and recent work has highlighted the superiority of the frequency-domain methods. This paper summarises practical frequency-domain estimation algorithms that use constraints on model structure and parameters to refine the search for approximating parametric models. Practical issues associated with the identification are discussed, including the influence of radiation-model accuracy on force-to-motion models, which are usually the ultimate modelling objective. The illustrative examples in the paper are obtained using a freely available MATLAB toolbox developed by the authors, which implements the estimation algorithms described.
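As a concrete illustration of frequency-domain estimation of a parametric model from frequency-response data, the sketch below fits a rational transfer function K(s) = P(s)/Q(s) to samples K(jw_k) using Levi's linearised least squares. This is a generic textbook method, not the authors' toolbox algorithm; their algorithms additionally impose constraints on model structure and parameters (for example on model order or low-frequency behaviour) that are omitted here. The function name and the synthetic test system are our own.

```python
import numpy as np

def fit_rational(w, H, n_num, n_den):
    """Fit K(s) = P(s)/Q(s), with Q monic of degree n_den, to samples
    H_k = K(j*w_k) via Levi's linearisation: P(j*w_k) - H_k*Q(j*w_k) ~ 0."""
    s = 1j * np.asarray(w)
    H = np.asarray(H)
    A_num = np.vander(s, n_num + 1, increasing=True)             # s^0 .. s^n_num
    A_den = -H[:, None] * np.vander(s, n_den, increasing=True)   # -H*s^0 .. -H*s^(n_den-1)
    A = np.hstack([A_num, A_den])
    rhs = H * s**n_den  # the monic leading term of Q moves to the right-hand side
    # Stack real and imaginary parts so the estimated coefficients stay real.
    theta, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                                np.concatenate([rhs.real, rhs.imag]),
                                rcond=None)
    num = theta[:n_num + 1]                            # b_0..b_n, ascending powers
    den = np.concatenate([theta[n_num + 1:], [1.0]])   # a_0..a_(n-1), then monic 1
    return num, den

# Quick check on synthetic data from K(s) = (2s + 3) / (s^2 + 0.5s + 3).
w = np.linspace(0.1, 10.0, 200)
s = 1j * w
H = (2 * s + 3) / (s**2 + 0.5 * s + 3)
num, den = fit_rational(w, H, n_num=1, n_den=2)
print(num)  # ~ [3, 2]   (ascending powers of s)
print(den)  # ~ [3, 0.5, 1]
```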
Abstract:
One of the core values to be applied by a body reviewing the ethics of human research is justice. The inclusion of justice as a requirement in the ethical review of human research is relatively recent, and its utility had been largely unexamined until debates arose about the conduct of international biomedical research in the late 1990s. The subsequent amendment of authoritative documents in ways that appeared to shift the meaning of conceptions of justice generated a good deal of controversy. Another difficulty has been that both the theory and the substance of justice applied by researchers or reviewers can frequently be seen to be subjective: both the concept of justice, whether distributive or commutative, and what counts as a just distribution or exchange are given different weight and meanings by different people. In this paper, the origins of, and more recent debates about, the requirement to consider justice as a criterion in the ethical review of human research are traced; relevant conceptions of justice are distinguished; and the manner in which they can be applied meaningfully in the ethical review of all human research is identified. The way that these concepts are articulated in, and the intent and function of, specific paragraphs of the National Statement on Ethical Conduct in Human Research (NHMRC, ARC, UA, 2007) (National Statement) is explained. The National Statement identifies a number of issues that should be considered when a human research ethics committee is reviewing the justice aspects of an application. It also provides guidance to researchers on how they can show that there is a fair distribution of burdens and benefits in the participant experience and the research outcomes, and practical guidance on how to think through issues of justice so that they can demonstrate that the design of their research projects meets this ethical requirement.
Abstract:
Introduction: The consistency of measuring small-field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1], and therefore requires the measurement of cross-axis profiles in a water tank. However, this makes output factor measurements time consuming. This project establishes at which field sizes the accuracy of output factors is not affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a 'small' field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail. The physical interaction that dominates the rapid dose reduction is quantified, leading to a theoretical definition of a 'small' field.

Methods: Current recommendations suggest that radiation collimation systems and isocentre-defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition of a small field is as follows: if the output factor changes by ±1.0 % given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored in a 0.5 mm wide and 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into a collimator scatter factor and a phantom scatter factor. The collimator scatter factor was further separated into primary source occlusion effects and 'traditional' effects (a combination of flattening filter and jaw scatter, etc.). The phantom scatter was separated into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size in order to quantify how each affected the change in output factor at small field sizes.

Results: The use of our practical definition resulted in field sizes of 15 mm or less being characterised as 'small'. The change in field size had a greater effect than detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose to the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes of less than 8 mm.

Discussion and conclusions: The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator). This is expected to depend on linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the 'traditional' definition of a small field [3]), it does not cause a greater change than photon scatter until a field size of 12 mm, at which point it becomes by far the dominant effect.
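A minimal sketch of the proposed practical test follows, under the assumption that output factors are tabulated at 1 mm increments of field size (as in the Monte Carlo study above): a field is flagged 'small' if moving to an adjacent field size (i.e. a ±1 mm collimator error) changes the output factor by more than 1.0 %. The output-factor curve used below is an illustrative placeholder, not the paper's simulated data.

```python
import numpy as np

def flag_small_fields(field_mm, of, tol=0.01):
    """Return field sizes whose output factor changes by more than `tol`
    (relative) when the field size shifts to an adjacent entry (+/- 1 mm)."""
    small = []
    for i, f in enumerate(field_mm):
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(of)]
        max_rel = max(abs(of[j] - of[i]) / of[i] for j in neighbours)
        if max_rel > tol:
            small.append(float(f))
    return small

field_mm = np.arange(4.0, 21.0)                      # 4-20 mm in 1 mm steps
of = 0.6 + 0.4 * (1 - np.exp(-(field_mm - 3) / 5))   # illustrative OF curve only
print(flag_small_fields(field_mm, of))               # fields failing the +/-1 % test
```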
Abstract:
In order to maintain public trust and confidence in human research, participants must be treated with respect. Researchers and Human Research Ethics Committee members need to be aware that modern considerations of this value include: the need for a valid consenting process; the protection of participants whose capacity to consent is compromised; the promotion of dignity for participants; and the effects that human research may have on cultures and communities. This paper explains the prominence of respect as a value when considering the ethics of human research and provides practical advice for both researchers and Human Research Ethics Committee members in developing respectful research practices.
Abstract:
I am interested in the psychology of entrepreneurship: how entrepreneurs think, decide to act, and feel. I recently realized that while my publications in academic journals have implications for entrepreneurs, those implications have remained relatively hidden in the text of articles published in journals largely inaccessible to those involved in the entrepreneurial process. This book is designed to bring the practical implications of my research to the forefront. I decided to take a different approach with this book and not write it for a publisher, because I wanted the ideas to be freely available: (1) I wanted those interested in practical advice for entrepreneurs to be able to freely download, distribute, and use this information (I only ask that the content be properly cited); (2) I wanted to release the chapters independently and make chapters available as they are finished; and (3) I wanted this work to be a dialogue rather than a one-way conversation. I hope readers email me feedback (positive and negative) so that I can use this information to revise the book. In producing the journal articles underpinning this book, I have had the pleasure of working with many talented and wonderful colleagues; they are cited at the end of each chapter. I hope you find some of the advice in this book useful.
Abstract:
As the first academically rigorous interrogation of the generation of performance within the global frame of the motion capture volume, this research presents a historical contextualisation and develops and tests a set of first principles through an original series of theoretically informed, practical exercises to guide those working in the emergent space of performance capture. It contributes a new understanding of the framing of performance in The Omniscient Frame, and initiates and positions performance capture as a new and distinct interdisciplinary discourse in the fields of theatre, animation, performance studies and film.
Abstract:
From the KCWS 2010 Chairs and Summit Proceedings Editors:

'Knowledge' is a resource which relies on the past for a better future. In the 21st century, more than ever before, cities around the world depend on the knowledge of their citizens, their institutions, and their firms and enterprises. The knowledge image, the human competence, and the reputation of their public and private institutions and corporations profile a city. They attract investment, qualified labour and professionals, as well as students and researchers, and they create local life spaces and professional milieus which offer quality of life to citizens seeking to cope with the challenges of modern life in a competitive world. Integrating knowledge-based development in urban strategies and policies, beyond the provision of schools and locations for higher education, has become a new and ambitious arena of city politics. Moving from theory to practice, bringing together the manifold knowledge stakeholders in a city, and preparing joint visions for the knowledge city is a new challenge for city managers, urban planners, and leaders of civic society. It requires visionary power, creativity, holistic thinking, the willingness to cooperate with all groups of the local civil society, and the capability to moderate communication processes in order to overcome conflicts and develop joint action for a sustainable future.

Melbourne 2010 – The Third Knowledge City World Summit is a timely reminder that 'knowledge' is the key notion in 21st-century development. Considering this notion, the summit aims to shed light on the multi-faceted dimensions and various scales of building the 'knowledge city' and on 'knowledge-based development' paradigms. At this summit, the theoretical and practical maturing of knowledge-based development paradigms will be advanced through the interplay between the theories of the world's leading academics and the practical models and strategies of practitioners and policy makers drawn from around the world.

As chairs of the Melbourne 2010 Summit, we have compiled these proceedings in order to disseminate the knowledge generated and shared in Melbourne with the wider research, governance, and practice communities. The papers in the proceedings reflect the broad range of contributions to the summit. They report on recent developments in planning and managing knowledge cities and ICT infrastructure; they assess the role of knowledge institutions in regional innovation systems and the intellectual capital of cities and regions; they describe the evolution of knowledge-based approaches to urban development in differing cultural environments; and they bridge the discourse on the knowledge city to other urban development paradigms such as the creative city, the ubiquitous city, and the compact city. The diversity of papers presented shows how scholars from planning cultures around the world interpret the knowledge dimension in urban and regional development.

All papers in these proceedings have gone through a double-blind peer review process and been reviewed by our summit editorial review and advisory board members. We cordially thank the members of the Summit Proceedings Editorial Review and Advisory Board for their diligent work in reviewing the papers. We hope the papers in these proceedings will inspire and make a significant contribution to the research, governance, and practice circles.
Abstract:
Plasma Nanoscience is a multidisciplinary research field which aims to elucidate the specific roles, purposes, and benefits of the ionized-gas environment in assembling and processing nanoscale objects in natural, laboratory, and technological situations. Compared to neutral-gas-based routes, low-temperature, weakly ionized plasmas involve another level of complexity related to the necessity of creating and sustaining a suitable degree of ionization, and to the much larger number of species generated in the gas phase. The thinner the nanotubes, the stronger the quantum confinement of electrons, and the more pronounced the size-dependent quantum effects that can emerge. Furthermore, owing to the very high mobility of electrons, surfaces sit at a negative potential relative to the plasma bulk. There are therefore non-uniform electric fields within the plasma sheath: the field lines start in the plasma bulk and converge on the sharp tips of the developing one-dimensional nanostructures.
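The negative surface potential attributed above to the high electron mobility can be quantified with the standard floating-potential estimate from elementary sheath theory; this is a textbook result, not a formula derived in the abstract itself. The surface charges negative until the electron and ion fluxes to it balance:

```latex
% Floating potential V_f of an isolated surface relative to the plasma
% potential V_p (T_e: electron temperature; m_i, m_e: ion and electron masses).
V_f - V_p \approx -\frac{k_B T_e}{2e}\,\ln\!\left(\frac{m_i}{2\pi m_e}\right)
```

Because the logarithm involves the ion-to-electron mass ratio, the surface floats several electron temperatures below the bulk, which is what sustains the sheath field converging on sharp nanostructure tips.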