941 results for Structured illumination


Abstract:

Purpose – The purpose of this paper is to examine the role of three strategies (organisational, business and information systems) in the post-implementation of technological innovations. The findings reported in the paper are that improvements in operational performance can only be achieved by aligning technological innovation effectiveness with operational effectiveness. Design/methodology/approach – A combination of qualitative and quantitative methods was used in a two-stage methodological approach. Unstructured and semi-structured interviews, based on the findings of the literature, were used to identify the key factors used in the survey instrument design. Confirmatory factor analysis (CFA) was used to examine structural relationships between the set of observed variables and the set of continuous latent variables. Findings – Initial findings suggest that organisations looking for improvements in operational performance through the adoption of technological innovations need to align these with the operational strategies of the firm. Operational effectiveness and technological innovation effectiveness are directly and significantly related to improved operational performance. Perceived increases in operational effectiveness are positively and significantly correlated with improved operational performance, and the findings suggest that technological innovation effectiveness is also positively correlated with improved operational performance. However, the study found no direct influence of the organisational, business and information systems (IS) strategies on improvement of operational performance. Improved operational performance is the result of interactions between the implementation of strategies and the related outcomes of both technological innovation and operational effectiveness. Practical implications – Some organisations use technological innovations such as enterprise information systems to innovate through improvements in operational performance. However, they often focus strategically only on the effectiveness of technological innovation or on operational effectiveness. Such a narrow focus is detrimental to the enterprise in the long term. This research demonstrates that maximum returns cannot be achieved through technological innovations alone: the dimensions of operational effectiveness need to be aligned with technological innovations to improve operational performance. Originality/value – No single technological innovation implementation can deliver a sustained competitive advantage; rather, an advantage is obtained through the capacity of an organisation to exploit technological innovations’ functionality on a continuous basis. To achieve sustainable results, technology strategy must be aligned with organisational and operational strategies. This research proposes the key performance objectives and dimensions that organisations should focus on to achieve strategic alignment. Research limitations/implications – The principal limitation of this study is that the findings are based on a small sample. The influence of scale needs to be explored before the results of this study are generalised.

Abstract:

Purpose: Flickering stimuli increase the metabolic demand of the retina, making them a sensitive perimetric stimulus for the early onset of retinal disease. We determine whether flickering stimuli are a more sensitive indicator of vision deficits resulting from acute, mild systemic hypoxia than standard static perimetry. Methods: Static and flicker visual perimetry were performed in 14 healthy young participants breathing 12% oxygen (hypoxia) under photopic illumination. The hypoxia visual field data were compared with the field data measured during normoxia. Absolute sensitivities (in dB) were analysed in seven concentric rings at 1°, 3°, 6°, 10°, 15°, 22° and 30° eccentricity, and the mean defect (MD) and pattern defect (PD) were calculated. Preliminary data are reported for mesopic light levels. Results: Under photopic illumination, flicker and static visual field sensitivities at all eccentricities were not significantly different between the hypoxia and normoxia conditions. The mean defect and pattern defect were not significantly different for either test between the two oxygenation conditions. Conclusion: Although flicker stimulation increases cellular metabolism, photopic flicker visual field impairment is not detected during mild hypoxia. These findings contrast with electrophysiological flicker tests in young participants, which show impairment under photopic illumination at the same levels of mild hypoxia. Potential mechanisms contributing to the difference between the visual field and electrophysiological flicker tests, including variability in perimetric data, neuronal adaptation and vascular autoregulation, are considered. The data have implications for the use of visual perimetry in the detection of ischaemic/hypoxic retinal disorders under photopic and mesopic light levels.

Abstract:

The recently discovered intrinsically photosensitive melanopsin retinal ganglion cells contribute to the maintenance of pupil diameter and to the recovery and post-illumination components of the pupillary light reflex, and they provide the primary environmental light input to the suprachiasmatic nucleus for photoentrainment of the circadian rhythm. This review summarises recent progress in understanding the histology and physiological properties of intrinsically photosensitive ganglion cells in the context of their contribution to pupillary and circadian functions, and introduces a clinical framework for using the pupillary light reflex to evaluate inner retinal (intrinsically photosensitive melanopsin ganglion cell) and outer retinal (rod and cone photoreceptor) function in the detection of retinal disease.

Abstract:

Before 2001, most Africans immigrating to Australia were white South Africans and Zimbabweans who arrived as economic and family-reunion migrants (Cox, Cooper & Adepoju, 1999). Black African communities are a more recent addition to the Australian landscape, with most entering Australia as refugees after 2001. African refugees are a particularly disadvantaged immigrant group, one which the Department of Immigration and Multicultural Affairs (cited in Community Relations Commission of New South Wales, 2006, p. 23) suggests requires high levels of settlement support. Decision makers and settlement service providers need settlement data on these communities so that they can be effective in planning, budgeting and delivering support where it is most needed. Settlement data are also useful for determining the challenges that these communities face in trying to establish themselves in resettlement. However, there has been no verification of existing secondary data sources, nor any previous formal study of African refugee settlement geography in Southeast Queensland. This research addresses the knowledge gap by using a mixed-method approach to identify and describe the distribution and population size of eight African communities in Southeast Queensland, examine secondary migration patterns in these communities, and assess the relationship between these geographic features and housing, a critical factor in successful settlement. Significant discrepancies exist between the primary data gathered in the study and existing secondary data relating to the population size and distribution of the communities. Results also reveal a tension between the socio-cultural forces and the housing and economic imperatives driving secondary migration in the communities, and a general lack of engagement by African refugees with structured support networks. These findings have a wide range of implications for policy and for groups that provide settlement support to these communities.

Abstract:

With the advances in computer hardware and software development techniques over the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to carry out various kinds of system studies, and simulation is now proven to be the cheapest means of performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solutions and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common, and most applications focused on isolated parts of the railway system. It is more appropriate to regard those applications as mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and each has its own special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be largely enhanced by advanced software design; maintainability and modularity for easy understanding and further development, and portability across hardware platforms, are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models, with attention given in particular to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve various ‘what-if’ issues in a wide range of applications, such as speed profiles, energy consumption and run times.
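To make the train movement side of such models concrete, the sketch below shows the simplest form a single-train movement simulator can take: forward integration of Newton's second law with a tractive-effort curve and a Davis-type running resistance. It is an illustration only; the effort curve, Davis coefficients, train mass and speed limit are assumed placeholder values, not figures from the paper.

```python
# Minimal single-train movement simulation (illustrative sketch).
# All numbers below are assumed, not taken from the paper.

def tractive_effort(v, max_force=300e3, power=4e6):
    """Constant force at low speed, constant power above the knee (N)."""
    return min(max_force, power / max(v, 0.1))

def davis_resistance(v, a=6e3, b=120.0, c=7.0):
    """Davis-type running resistance R(v) = a + b*v + c*v^2 (N)."""
    return a + b * v + c * v * v

def simulate(mass=500e3, v_limit=30.0, distance=5000.0, dt=0.5):
    """Integrate motion with forward Euler; coast when at the speed limit."""
    t, s, v, log = 0.0, 0.0, 0.0, []
    while s < distance:
        force = tractive_effort(v) if v < v_limit else 0.0
        acc = (force - davis_resistance(v)) / mass
        v = max(v + acc * dt, 0.0)
        s += v * dt
        t += dt
        log.append((t, s, v))          # the speed profile
    return log

profile = simulate()
print(f"run time: {profile[-1][0]:.0f} s over {profile[-1][1]:.0f} m")
```

Extending such a loop with track gradients, speed restrictions and a braking curve, and feeding the resulting power demand into a supply network solver, gives the kind of coupled subsystem simulation the paper reviews.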

Abstract:

Background/objectives: The provision of the patient bed-bath is a fundamental nursing care activity, yet few quantitative data and no qualitative data are available on registered nurses’ (RNs’) clinical practice in this domain in the intensive care unit (ICU). The aim of this study was to describe ICU RNs’ current practice with respect to the timing, frequency and duration of the patient bed-bath and the cleansing and emollient agents used. Methods: The study utilised a two-phase sequential explanatory mixed-method design. Phase one used a questionnaire to survey RNs and phase two employed semi-structured focus group (FG) interviews with RNs. Data were collected over 28 days across four Australian metropolitan ICUs. Ethical approval was granted by the relevant hospital and university human research ethics committees. RNs were asked to complete a questionnaire following each episode of care (i.e. bed-bath) and then to attend one of three FG interviews: RNs with less than 2 years ICU experience; RNs with 2–5 years ICU experience; and RNs with greater than 5 years ICU experience. Results: During the 28-day study period the four ICUs had an average of 77.25 beds open. In phase one a total of 539 questionnaires were returned, representing 30.5% of patient bed-bath episodes (based on 1767 occupied bed-days and one bed-bath per patient per day). In 349 bed-bath episodes, 54.7% of patients were mechanically ventilated. The bed-bath was given between 02.00 and 06.00 h in 161 episodes (30%), took 15–30 min to complete in 195 episodes (36.2%) and was completed within the last 8 h in 304 episodes (56.8%). Cleansing agents used were predominantly pH-balanced soap or liquid soap and water (n = 379, 71%), in comparison with chlorhexidine-impregnated sponges/cloths (n = 86, 16.1%) or other agents such as pre-packaged washcloths (n = 65, 12.2%). In 347 episodes (64.4%) emollients were not applied after the bed-bath. In phase two 12 FGs were conducted (three at each ICU) with a total of 42 RN participants. Thematic analysis of FG transcripts across the three levels of RN ICU experience highlighted a transition in patient hygiene practice philosophy, from ‘shades of grey: falling in line’ for inexperienced clinicians to experienced clinicians’ concrete beliefs about patient bed-bath needs. Conclusions: This study identified variation in the processes and products used in patient hygiene practices in four ICUs. Further study is required to determine the appropriate timing of patient hygiene activities and the cleansing agents that best maintain skin integrity, in order to improve patient outcomes.

Abstract:

As the problems involving infrastructure delivery have become more complex and contentious, there has been an acknowledgement that these problems cannot be resolved by any one body working alone. This understanding has driven multi-sectoral collaboration and has led to an expansion of the set of actors, including stakeholders, who are now involved in the delivery of infrastructure projects and services. However, more needs to be understood about how to include stakeholders in these processes and how to develop the requisite combination of stakeholders to achieve effective outcomes. This thesis draws on stakeholder theory and governance network theory to obtain insights into how three multi-level networks within the Roads Alliance in Queensland engage with stakeholders in the delivery of complex and sensitive infrastructure services and projects. New knowledge about stakeholders will be obtained by testing a model of stakeholder salience and engagement which combines and extends stakeholder identification and salience theory, the ladder of stakeholder management and engagement, and the model of stakeholder engagement and moral treatment of stakeholders. Applying this model will address the broad research question: “Who or what decides how stakeholders are engaged by governance networks delivering public outcomes?” The case studies will test this theoretical model, which links strategic decisions about stakeholder salience with the quality and quantity of strategies for engaging different types of stakeholders. A multiple embedded case study design has been selected as the overall approach to explore, describe, explain and evaluate how stakeholder engagement occurs in three governance networks delivering road infrastructure in Queensland. The research design also incorporates a four-stage approach to data collection: observations, stakeholder analysis, a telephone survey questionnaire and semi-structured interviews. The outcomes of this research will contribute to and extend stakeholder theory by showing how stakeholder salience affects decisions about the types of engagement processes implemented, and will extend governance network theory by showing how governance networks interact with stakeholders through the concepts of stakeholder salience and engagement. From a practical perspective, this research will provide governance networks with an indication of how to optimise engagement with different types of stakeholders.

Abstract:

Purpose: The purpose of this paper is to expose the impact of the shortage of senior academics, particularly professors, in Australian accounting schools, to relate the way one school addressed this shortage through a mentoring scheme, and to challenge existing institutional arrangements. Design/methodology/approach: This is a contextualised qualitative case study of a mentoring scheme conducted in an Australian accounting school. Data collected from semi-structured interviews, personal reflections and Australian university web sites are interpreted theoretically using the metaphor of a “green drought”. Findings: The mentoring scheme achieved some notable successes, but raised many issues and challenges. Mentoring is a multifaceted investment in vocational endeavour and intellectual infrastructure, which will not occur unless creative means are developed over the long term to overcome current and future shortages of academic mentors. Research limitations/implications: This is a qualitative case study, which therefore limits its generalisability. However, its contextualisation enables insights to be applied to the wider academic environment. Practical implications: In the Australian and global academic environment, as accounting professors retire in greater numbers, new and creative ways of mentoring will need to be devised. The challenge will be to address longer-term issues of academic sustainability, and not just to focus on short-term academic outcomes. Originality/value: A mentoring scheme based on a collegial networking model of mentoring is presented as a means of enhancing academic endeavour through a creative short-term solution to a shortage of accounting professors. The paper exemplifies the theorising power of metaphor in a qualitative study.

Abstract:

Social enterprises are diverse in their missions, business structures and industry orientations. Like all businesses, social enterprises face a range of strategic and operational challenges and utilize a range of strategies to access resources in support of their venture. This exploratory study examined the strategic management issues faced by Australian social enterprises and the ways in which they respond to these. The research was based on a comprehensive literature review and semi-structured interviews with 11 representatives of eight social enterprises based in Victoria and Queensland. The sample included mature social enterprises and those within two years of start-up. In addition to the research report, the outputs of the project include a series of six short documentaries, which are available on YouTube at http://www.youtube.com/user/SocialEnterpriseQUT#p/u. The research reported on here suggests that social enterprises are sophisticated in utilizing processes of network bricolage (Baker et al. 2003) to mobilize resources in support of their goals. Access to network resources can be both enabling and constraining as social enterprises mature. In terms of formal business planning strategies, all participating social enterprises had utilized these either at the outset or at the point of maturation of their business operations. These planning activities were used to support internal operations, to provide a mechanism for managing collective entrepreneurship, and to communicate to external stakeholders about the legitimacy and performance of the social enterprises. Further research is required to assess the impacts of such planning activities and the ways in which they are used over time. Business structures and governance arrangements varied amongst participating enterprises according to mission and values, capital needs, and the experiences and culture of founding organizations and individuals. In different ways, participants indicated that business structures and governance arrangements are important ways of conferring legitimacy on social enterprise, by signifying responsible business practice and strong social purpose to both external and internal stakeholders. Almost all participants in the study described ongoing tensions in balancing social purpose and business objectives. It is not clear, however, whether these tensions were problematic (in the sense of eroding mission or business opportunities) or productive (in the sense of strengthening mission and business practices through iterative processes of reflection and action). Longitudinal research on the ways in which social enterprises negotiate mission fulfillment and business sustainability would enhance our knowledge in this area. Finally, despite the growing emphasis on measuring social impact amongst institutions, including governments and philanthropy, that influence the operating environment of social enterprise, relatively little priority was placed on this activity by participants. The participants in our study noted the complexities of effectively measuring social impact, as well as the operational difficulties of undertaking such measurement within the day-to-day realities of running small to medium businesses. It is clear that impact measurement remains a vexed issue for a number of our respondents. This study suggests that both the value and the practicality of social impact measurement require further debate and critically informed evidence if impact measurement is to benefit social enterprises and the communities they serve.

Abstract:

User-Web interactions have emerged as an important area of research in the field of information science. In this study, we investigate the effects of users’ cognitive styles on their Web navigational styles and information processing strategies. We report results from the analyses of 594 minutes of recorded Web search sessions of 18 participants engaged in 54 scenario-based search tasks. We use questionnaires, a cognitive style test, Web session logs and think-aloud protocols as the data collection instruments. We classify users’ cognitive styles as verbalisers and imagers based on Riding’s (1991) Cognitive Style Analysis test. Two classifications of navigational styles and three categories of information processing strategies are identified. Our findings show that relationships exist between users’ cognitive styles and their navigational styles and information processing strategies. Verbalisers seem to display sporadic navigational styles and adopt a scanning strategy to understand the content of the search result page, while imagers follow a structured navigational style and a reading approach. We develop a matrix and a model that depict the relationships between users’ cognitive styles and their navigational styles and information processing strategies. We discuss how the findings from this study could help search engine designers provide adaptive navigation support to users.

Abstract:

What is a record producer? There is a degree of mystery and uncertainty about just what goes on behind the studio door. Some producers are seen as Svengali-like figures manipulating artists into mass consumer product; producers are sometimes also seen as mere technicians whose job is simply to set up a few microphones and press the record button. Close examination of the recording process shows how far this is from a complete picture. Artists are special: they come with an inspiration and a talent, but also with a variety of complications, and in many ways a recording studio can seem the least likely place for creative expression and an affective performance to happen. The task of the record producer is to engage with these artists and their songs and turn these potentials into form through the technology of the recording studio. The purpose of the exercise is to disseminate this fixed form to an imagined audience, generally in the hope that this audience will prove to be real. Finding an audience is the role of the record company, and a record producer must also engage with the commercial expectations of the interests that underwrite a recording. This dissertation considers three fields of interest in the recording process: the performer and the song; the technology of the recording context; and the commercial ambitions of the record company. It positions the record producer as a nexus at the interface of all three. The author reports his structured recollection of five recordings, made with three different artists, all of which achieved substantial commercial success. The processes are considered from the author’s perspective as the record producer, from inception of each project to completion of the recorded work. What were the processes of engagement? Do the actions reported conform to the template of nexus? This dissertation proposes that in all recordings the function of producer/nexus is present and necessary; it exists in the interaction of the artistry and the technology. The art of record production is to engage with these artists and the songs they bring, and to turn these potentials into form.

Abstract:

In 1990 the European Community was taken by surprise by the urgency of demands from the newly elected Eastern European governments to become member countries. Those governments were honouring the mass social movement of the streets the year before, which had demanded free elections and a liberal economic system associated with “Europe”. The mass movement had in fact been accompanied by much activity within institutional politics, in Western Europe, the former “satellite” states, the Soviet Union and the United States, to set up new structures, with German reunification and an expanded EC as the centrepiece. This paper draws on the writer’s doctoral dissertation on mass media in the collapse of the Eastern bloc, which focused on the Berlin Wall and documented both public protests and institutional negotiations. For example, the writer, a correspondent in Europe at that time, recounts the interventions of the German Chancellor, Helmut Kohl, at a European summit in Paris nine days after the fall of the Wall, and his separate negotiations with the French President, Francois Mitterrand, on reunification and EU monetary union after 1992. Through such processes, the “European idea” would receive fresh impetus, though the EU which eventuated came with many altered expectations. It is argued here that, as a result of the shock of 1989, a “social” Europe can be seen emerging as a shared experience of daily life, especially among people born during the last two decades of European consolidation. The paper draws on the author’s major research, in four parts: (1) field observation from the strategic vantage point of a news correspondent, including a treatment of evidence at the time of the wishes and intentions of the mass public (including the unexpected drive to join the European Community) and those of governments (e.g. thoughts of a “Tiananmen Square solution” in East Berlin, versus the non-intervention policies of the Soviet leader, Mikhail Gorbachev); (2) a review of coverage of the crisis of 1989 by major news media outlets, treated as a history of the process; (3) as a comparison, and a test of accuracy and analysis, a review of conventional histories of the crisis appearing a decade later; and (4) a further review, and test, provided by journalists responsible for the coverage of the time, as reflection on practice, obtained from semi-structured interviews.

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or to modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium has its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). First, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. One method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam, and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously; this technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
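As an illustration of the kind of two-dimensional FD-BPM calculation described here, the sketch below propagates a readout beam through a medium carrying a stripe-like photoinduced index change, using a Crank-Nicolson finite-difference scheme for the paraxial wave equation. It is not the thesis's actual model; every numerical value (wavelength, grid, index perturbation, stripe period) is an assumed placeholder.

```python
# Illustrative 2-D FD-BPM sketch: Crank-Nicolson solution of the paraxial
# equation dE/dz = (i/2k) d2E/dx2 + i*k0*dn(x)*E. All parameters assumed.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

wavelength = 633e-9                      # probe wavelength (m), assumed
n0 = 2.2                                 # approximate index of LiNbO3
k0 = 2 * np.pi / wavelength
k = k0 * n0

nx, nz = 400, 200                        # transverse / longitudinal steps
dx, dz = 0.5e-6, 5e-6                    # grid spacings (m), assumed
x = (np.arange(nx) - nx / 2) * dx

# Stored "stripe" pattern as a small photoinduced index change (assumed).
dn = -1e-4 * (np.abs(np.sin(2 * np.pi * x / 60e-6)) > 0.7)

# Broad Gaussian readout beam at the input face.
E = np.exp(-(x / 40e-6) ** 2).astype(complex)

lap = diags([1, -2, 1], [-1, 0, 1], shape=(nx, nx)) / dx**2
A = 1j / (2 * k) * lap + 1j * k0 * diags(dn)
I = identity(nx)
lhs = (I - 0.5 * dz * A).tocsc()
rhs = (I + 0.5 * dz * A).tocsc()

for _ in range(nz):                      # march the field through the crystal
    E = spsolve(lhs, rhs @ E)

intensity = np.abs(E) ** 2               # what the CCD records at the exit face
```

Even a toy model like this reproduces the qualitative point made above: the beam scatters from the induced index structures as it propagates, so the field reaching deeper layers is no longer a clean image of the mask.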
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage process, particularly the degradation and recovery dynamics, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
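The fringe-counting thermometry described in this abstract reduces to a simple relation: one intensity oscillation corresponds to one wavelength of additional optical path difference, so the temperature change per fringe is λ/(L·|d(Δn)/dT|). A back-of-envelope sketch, in which the crystal length and the thermo-optic coefficient are assumed order-of-magnitude values rather than figures from the thesis:

```python
# Fringe-counting temperature estimate (illustrative, assumed values only).
wavelength = 633e-9   # probe wavelength (m), assumed
L = 10e-3             # optical path length in the crystal (m), assumed
d_dn_dT = 4e-5        # |d(birefringence)/dT| (1/K), assumed magnitude

dT_per_fringe = wavelength / (L * d_dn_dT)   # K per intensity oscillation

fringes_counted = 12  # hypothetical count from the CCD intensity trace
print(f"temperature change ~ {fringes_counted * dT_per_fringe:.1f} K")
```

With these placeholder numbers each oscillation corresponds to roughly 1.6 K, which is why counting oscillations across a wide, expanded beam can map temperature changes, and hence gradients, over the crystal.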

Abstract:

Teaching The Global Dimension (2007) is intended for primary and secondary teachers, pre-service teachers and educators interested in fostering global concerns in the education system. It aims to link theory and practice and is structured as follows. Part 1, the global dimension, proposes an educational framework for understanding global concerns. Individual chapters in this section deal with some educational responses to global issues and the ways in which young people might become, in Hicks's terms, more “world-minded”. In the first two chapters, Hicks presents, first, some educational responses to global issues that have emerged in recent decades and, second, an outline of the evolution of global education as a specific field. As with all the chapters in this book, most of the examples are drawn from the United Kingdom. Young people's concerns, student teachers' views and the teaching of controversial issues comprise the other chapters in this section. Taken collectively, the chapters in Part 2 articulate the conceptual framework for developing, teaching and evaluating a global dimension across the curriculum. Individual chapters in this section, written by a range of authors, explore eight key concepts considered necessary to underpin appropriate learning experiences in the classroom: conflict, social justice, values and perceptions, sustainability, interdependence, human rights, diversity and citizenship. These chapters are engaging and well structured. Their common format consists of a succinct introduction, reference to positive action for change, and examples of recent effective classroom practice. Two chapters comprise the final section of this book and suggest different ways in which the global dimension can be achieved in the primary and the secondary classroom.

Abstract:

Becoming a Teacher is structured in five very readable sections. The introductory section addresses the nature of teaching and the importance of developing a sense of purpose for teaching in a 21st century classroom. It also introduces some key concepts that are explored throughout the volume according to the particular chapter focus of each part. For example, the chapters in Part 2 explore aspects of student learning and the learning environment, focusing on how students develop and learn, learner motivation, developing self-esteem and learning environments. The concepts developed in this section, such as human development, stages of learning, motivation and self-concept, are contextualised in terms of theories of cognitive development and theories of social, emotional and moral development. The author, Colin Marsh, draws on his extensive experience as an educator to structure the narrative of the chapters in this part via checklists for observation, summary tables, sample strategies for teaching at specific stages of student development, and questions under the heading ‘your turn’. Case studies such as ‘How I use Piaget in my teaching’ make that essential link between theory and practice, something which pre-service teachers struggle with in the early phases of their university course. I was pleased to see that Marsh also explores the contentious and debated aspects of these theoretical frameworks, to demonstrate that pre-service teachers must engage with and critique the ways in which theories about teaching and learning are applied. Marsh weaves key quotations and important references into each chapter’s narrative and concludes every chapter with summary comments, reflection activities, lists of important references and useful web sources. As one would expect of a book published in 2008, Becoming a Teacher is informed by the most recent reports of classroom practice, current policy initiatives and research.