Abstract:
Plasma or "dry" etching is an essential process for the production of modern microelectronic circuits. However, despite intensive research, many aspects of the etch process are not fully understood. This thesis presents the results of studies of the plasma etching of Si and SiO2 in fluorine-containing discharges, together with the complementary technique of plasma polymerisation. Optical emission spectroscopy with argon actinometry was used as the principal plasma diagnostic. Statistical experimental design was used to model and compare Si and SiO2 etch rates in CF4 and SF6 discharges as a function of flow, pressure and power. Etch mechanisms in both systems, including the potential reduction of Si etch rates in CF4 due to fluorocarbon polymer formation, are discussed. Si etch rates in CF4/SF6 mixtures were successfully accounted for by the models produced. Si etch rates in CF4/C2F6 and CHF3 as a function of the addition of oxygen-containing additives (O2, N2O and CO2) are shown to be consistent with a simple competition between F, O and CFx species for Si surface sites. For the range of conditions studied, SiO2 etch rates were not dependent on F-atom concentration, but the presence of fluorine was essential in order to achieve significant etch rates. The influence of a wide range of electrode materials on the etch rates of Si and SiO2 in CF4 and CF4/O2 plasmas was studied. The Si etch rate in a CF4 plasma was considerably enhanced, relative to an anodised aluminium electrode, in the presence of soda glass or sodium- or potassium-"doped" quartz. The effect was even more pronounced in a CF4/O2 discharge, in which lead and copper electrodes also enhanced the Si etch rate. These results could not be accounted for by a corresponding rise in atomic fluorine concentration. Three possible etch-enhancement mechanisms are discussed. Fluorocarbon polymer deposition was studied, both because of its relevance to etch mechanisms and for its intrinsic interest, as a function of fluorocarbon source gas (CF4, C2F6, C3F8 and CHF3), process time, RF power and percentage hydrogen addition. Gas-phase concentrations of F, H and CF2 were measured by optical emission spectroscopy, and the resultant polymer structure was determined by X-ray photoelectron spectroscopy and infrared spectroscopy. Thermal and electrical properties were also measured. Hydrogen additions are shown to have a dominant role in determining deposition rate and polymer composition. A qualitative description of the polymer growth mechanism is presented which accounts for changes in both growth rate and structure, and leads to an empirical deposition rate model.
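The abstract does not reproduce the fitted models themselves; as a purely illustrative sketch, a response-surface model of the kind produced by statistical experimental design can be fitted as follows (all data and coefficients below are hypothetical, not values from the thesis):

```python
import numpy as np

# Illustrative response-surface etch-rate model: rate = f(flow, pressure, power).
# The data below are hypothetical, chosen only to show the fitting procedure.
flow = np.array([10, 10, 50, 50, 30, 30, 10, 50, 30])             # sccm
pressure = np.array([50, 200, 50, 200, 125, 125, 125, 125, 125])  # mTorr
power = np.array([100, 100, 100, 100, 50, 300, 175, 175, 175])    # W
rate = np.array([120, 180, 150, 260, 90, 400, 230, 280, 250])     # nm/min

# Design matrix with an intercept, main effects and two-way interactions.
X = np.column_stack([
    np.ones_like(flow), flow, pressure, power,
    flow * pressure, flow * power, pressure * power,
])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)

def predicted_rate(f, p, w):
    """Predict etch rate (nm/min) from the fitted empirical model."""
    x = np.array([1.0, f, p, w, f * p, f * w, p * w])
    return x @ coef

print(predicted_rate(30, 125, 175))
```

A model of this form is what allows etch rates in gas mixtures to be "accounted for by the models produced": predictions at untested settings are read straight off the fitted surface.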
Abstract:
We demonstrate a novel glucose sensor based on an optical fiber grating with an excessively tilted index fringe structure, its surface modified by glucose oxidase (GOD). Aminopropyltriethoxysilane (APTES) was utilized as the binding site for the subsequent GOD immobilization. Confocal microscopy and fluorescence microscopy were used to assess the effectiveness of the fiber-surface modification. The resonance wavelength of the sensor exhibited a red-shift after the binding of the APTES and GOD to the fiber surface, and also during the glucose detection process. The red-shift of the resonance wavelength showed a good linear response to the glucose concentration, with a sensitivity of 0.298 nm·(mg/ml)⁻¹ in the very low concentration range of 0.0-3.0 mg/ml. Compared to the previously reported glucose sensor based on a GOD-immobilized long period grating (LPG), the 81° tilted fiber grating (81°-TFG) based sensor has shown a lower thermal cross-talk effect, better linearity and a higher Q-factor in its sensing response. In addition, its sensitivity to glucose concentration can be further improved by increasing the grating length and/or choosing a higher-order cladding mode for detection. Potentially, the proposed 81°-TFG based techniques can be developed into sensitive, label-free and micro-structural sensors for applications in food safety, disease diagnosis, clinical analysis and environmental monitoring.
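Given the reported linear response, converting a measured red-shift back to a glucose concentration is a one-line calibration. The sketch below uses the paper's reported sensitivity; the range check and example shift are our own assumptions:

```python
# Minimal calibration sketch for the 81°-TFG glucose sensor.
# Sensitivity is the reported 0.298 nm per (mg/ml), valid over 0.0-3.0 mg/ml.
SENSITIVITY_NM_PER_MG_ML = 0.298

def glucose_concentration(red_shift_nm: float) -> float:
    """Convert a measured resonance red-shift (nm) to glucose concentration (mg/ml)."""
    conc = red_shift_nm / SENSITIVITY_NM_PER_MG_ML
    if not 0.0 <= conc <= 3.0:
        raise ValueError("outside the calibrated 0.0-3.0 mg/ml range")
    return conc

print(glucose_concentration(0.45))  # ~1.51 mg/ml for a 0.45 nm red-shift
```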
Abstract:
Modern electron-optical techniques together with X-ray and mineralogical examination have been used to study the occurrence and form of phosphorus-bearing minerals in iron ores. Three ores have been studied: Bahariya and Aswan from Egypt, and Frodingham ironstone from the U.K. The iron in the Bahariya iron ore is present mainly as hematite and goethite. The gangue minerals are halite, gypsum, barytes, quartz and calcite. The iron content is between 49.8 and 63.2% and the phosphorus between 0.14 and 0.34%. The phosphorus occurs as very fine particles of apatite distributed throughout the ore; its removal would require very fine grinding followed by acid leaching. Aswan iron ore is an oolitic iron ore, with an iron content between 41 and 57% and a phosphorus content of 0.1 to 2.9%. It is mainly hematitic, with variable quantities of quartz and apatite and a small amount of clay minerals. In the oolitic iron ore beds, apatite occurs in the hematite matrix, filling the pores of the oolith surfaces or as a matrix cementing the ooliths with the hematite grains. In the sandstone clay beds the apatite is distributed mainly in the matrix. It is suggested that the liberation size for the apatite would be -80 μm, and that flotation concentration could be applied to remove the apatite from Aswan ore. Frodingham ironstone occurs in the Lower Jurassic beds of the South Humberside area. The average iron content is 25% and the phosphorus 0.32%. Seven mineral phases were identified by X-ray: calcite, quartz, chamosite, hematite, siderite, apatite and chlorite. Apatite occurs as very fine grains in the hematite and chamosite ooliths; as a matrix of fine grains intergrown with chamosite and calcite grains; and as anhedral and sub-rounded grains in the ooliths (8-28 μm). It is suggested that two processes are possible for dephosphorisation: the Flox process, or a reduction roast followed by fine grinding, magnetic separation and acid leaching.
Abstract:
The study utilised a Normal group, an Ocular Hypertensive (OHT) group and a Primary Open Angle Glaucoma (POAG) group to investigate two aspects. The first was the within- and between-visit variability of: stereometric measurements of the optic nerve head (ONH) using the Heidelberg Retina Tomograph (HRT); retinal nerve fibre layer (RNFL) thickness using the HRT and using optical coherence tomography with the Optical Coherence Tomography Scanner (OCT); the visual field using white-on-white (W-W), short-wavelength (SWAP) and Frequency Doubling perimetry (FDT); and retinal haemodynamics using the Heidelberg Retinal Flowmeter (HRF). The second was the association between some of the derived variables. The within- and between-visit variability for stereometric measurements of the entire ONH, and the between-visit variability for sectoral measurements, were similar for Normals and OHTs but greater for POAGs. The within-visit variability of the pointwise visual field parameters was greater for SWAP than for W-W and FDT, particularly with increasing eccentricity and for the OHT group. The between-visit variability increased with increasing defect depth for the POAG group, across all types of perimetry. The MS was greater, the MD and PSD smaller, and the examination duration shorter for FDT than for W-W and SWAP across all groups. The within-visit variability was less than the between-visit variability for the OCT circumferential and sector RNFL thickness using the 1.5R, 2.0R and fixed 1.73 mm circular scan radii, across the three groups. The variability increased with decreasing RNFL thickness, and was least for the 2.0R scan radius.
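The abstract does not state which variability statistic was used; a common choice for this kind of repeatability analysis is the pooled standard deviation of repeated measurements, sketched here with hypothetical RNFL readings:

```python
import numpy as np

# Hypothetical repeatability sketch: rows = subjects, columns = repeated
# RNFL-thickness measurements (µm). "Within-visit" uses repeats from one
# visit; "between-visit" uses one measurement from each of several visits.
within_visit = np.array([[98, 101, 99], [85, 84, 87], [110, 108, 111]], float)
between_visit = np.array([[99, 103, 95], [86, 90, 82], [109, 112, 104]], float)

def variability_sd(measurements: np.ndarray) -> float:
    """Pooled standard deviation of repeated measurements across subjects."""
    return float(np.sqrt(np.mean(np.var(measurements, axis=1, ddof=1))))

print(f"within-visit SD:  {variability_sd(within_visit):.2f} µm")
print(f"between-visit SD: {variability_sd(between_visit):.2f} µm")
```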
Abstract:
Previous studies of student volunteering have shown how formally organized volunteering activities have social, economic and practical benefits for student volunteers and the recipients of their volunteerism (Egerton, 2002; Vernon & Foster, 2002); moreover, student volunteering provides the means by which undergraduates are able to acquire and hone transferable skills sought by employers following graduation (Eldridge & Wilson, 2003; Norris et al, 2006). Within the UK Higher Education sector, a popular mechanism for accessing volunteering is through formally organized student mentoring programmes, whereby more 'senior' students volunteer to mentor less experienced undergraduates through a particular phase of their academic careers, including the transition from school or college to university. The value of student mentoring as a pedagogical tool within Higher Education is reflected in the literature (see, for example, Bargh & Schul, 1980; Hartman, 1990; Woodd, 1997). However, from a volunteering perspective, one of the key issues relates to the generally accepted conceptualisation of volunteering as a formally organized activity that is un-coerced and for which there is no payment (Davis Smith, 1992, 1998; Sheard, 1995). Although the majority of the student mentoring programmes discussed in the paper are unpaid and voluntary in nature, in a small number of institutions some of the mentoring programmes offered to students provide a minimum wage for mentors. From an ethical perspective, such payments may cause difficulties when considering potential mentors' motivations and reasons for participating in a programme. Additionally, institutions usually have only one or two paid mentoring programmes running alongside several voluntary programmes, sometimes resulting in over-subscription for places as paid mentors to the detriment of unpaid programmes. Furthermore, from an institutional perspective, student mentoring presents a particular set of ethical problems reflecting issues around 'matching' mentors and mentees in terms of gender, race, ethnicity and religion. This is found to be the case in some 'targeted' mentoring programmes, whereby a particular demographic group of students is offered access to mentoring in an attempt to improve their chances of academic success. This paper provides a comparative analysis of the experiences and perceptions of mentors and mentees participating in a wide range of different mentoring programmes. It also analyses the institutional challenges and benefits associated with managing large-scale student volunteering programmes. In doing so the paper adds to the third-sector literature by critiquing the distinctive issues surrounding student volunteering and by discussing, in depth, the management of large groups of student volunteers. From a public policy perspective, the economic, educational, vocational and social outcomes of student volunteering make this an important subject meriting investigation. Little is known about the mentoring experiences of student volunteers with regard to the 'added value' of participating in campus-based volunteering activities. Furthermore, in light of the current economic downturn, by drawing attention to the contribution that student volunteering makes in equipping undergraduates with transferable 'employability'-related skills and competencies (Andrews & Higson, 2008), this paper makes an important contribution to current educational and political debates.
The findings suggest that, in addition to providing the opportunity to acquire key transferable skills, mentoring encourages students to volunteer in other areas of university and community life. The paper concludes by arguing that student mentoring provides a valuable learning experience for student volunteer mentors and for the student and pupil mentees with whom they are placed.
Abstract:
We present results on the characterization of lasers with ultra-long cavity lengths of up to 84 km, the longest cavity ever reported. We have analyzed the mode structure, the shape and width of the generated spectra, and the intensity fluctuations as functions of cavity length and intra-cavity power. The RF spectra exhibit an ultra-dense cavity mode structure (the mode spacing is 1.2 kHz for the 84 km cavity), in which the width of the mode beating is proportional to the intra-cavity power, while the optical spectra broaden with power according to a square-root law, acquiring a specific shape with exponential wings. A model based on the wave turbulence formalism has been developed to describe the observed effects.
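As a quick consistency check (not part of the abstract), the quoted mode spacing matches the free spectral range of a linear fibre cavity, Δν = c/(2nL), assuming a typical effective index n ≈ 1.45 for silica fibre:

```python
# Free spectral range (mode spacing) of a linear fibre cavity: dv = c / (2*n*L).
# n = 1.45 is an assumed typical effective index, not a value from the abstract.
c = 2.998e8   # speed of light, m/s
n = 1.45      # assumed effective refractive index of silica fibre
L = 84e3      # cavity length, m

fsr_hz = c / (2 * n * L)
print(f"mode spacing ~ {fsr_hz:.0f} Hz")  # ~1230 Hz, i.e. the quoted ~1.2 kHz
```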
Abstract:
Quality, production and technological innovation management rank among the most important matters of concern to modern manufacturing organisations. They can provide companies with the decisive means of gaining a competitive advantage, especially within industries where there is increasing similarity in product design and manufacturing processes. The papers in this special issue of the International Journal of Technology Management have all been selected as examples of how aspects of quality, production and technological innovation can help to improve competitive performance. Most are based on presentations made at the UK Operations Management Association's Sixth International Conference, held at Aston University, at which the theme was 'Getting Ahead Through Technology and People'. At the conference itself over 80 papers were presented by authors from 15 countries around the world. Among the many topics addressed within the conference theme, technological innovation, quality and production management emerged as attracting the greatest concern and interest of delegates, particularly those from industry. For any new initiative to be implemented successfully, it should be led from the top of the organisation. Achieving the desired level of commitment from top management can, however, be difficult. In the first paper of this issue, Mackness investigates this question by explaining how systems thinking can help. In the systems approach, properties such as 'emergence', 'hierarchy', 'communication' and 'control' are used to assist top managers in preparing for change. Mackness's paper is complemented by Iijima and Hasegawa's contribution, in which they investigate the development of Quality Information Management (QIM) in Japan. They present the idea of a Design Review and demonstrate how it can be used to trace and reduce quality-related losses. The next paper on the subject of quality, by Whittle and colleagues, relates to total quality and the process of culture change within organisations. Using the findings of investigations carried out in a number of case-study companies, they describe four generic models which have been identified as characterising methods of implementing total quality within existing organisation cultures. Boaden and Dale's paper also relates to the management of quality, but looks specifically at the construction industry, where it has been found there is still some confusion over the roles of Quality Assurance (QA) and Total Quality Management (TQM). They describe the results of a questionnaire survey of forty companies in the industry and compare them with similar work carried out in other industries. Szakonyi's contribution completes this group of papers relating specifically to the question of quality. His concern is with the two ways in which R&D or engineering managers can work on improving quality: the first is by improving it in the laboratory, while the second is by working with other functions to improve quality in the company. The next group of papers in this issue all address aspects of production management. Umeda's paper proposes a new manufacturing-oriented simulation package for production management which provides important information for both the design and operation of manufacturing systems. A simulation of production strategy in a Computer Integrated Manufacturing (CIM) environment is also discussed.
This paper is followed by a contribution from Tanaka and colleagues, in which they consider loading schedules for manufacturing orders in a Material Requirements Planning (MRP) environment. They compare mathematical programming with a knowledge-based approach, and comment on their relative effectiveness in different practical situations. Engstrom and Medbo's paper then looks at a particular aspect of production system design, namely the question of devising group working arrangements for assembly with new product structures. Using the case of a Swedish vehicle assembly plant where long-cycle assembly work has been adopted, they advocate the use of a generally applicable product structure which can be adapted to suit individual local conditions. In the last paper of this group, Tay considers how automation has affected production efficiency in Singapore. Using data from ten major industries he identifies several factors which are positively correlated with efficiency, with capital intensity being of greatest interest to policy makers. The two following papers examine the case of electronic data interchange (EDI) as a means of improving the efficiency and quality of trading relationships. Banerjee and Banerjee consider a particular approach to material provisioning for production systems using orderless inventory replenishment. Using the example of a single supplier and multiple buyers, they develop an analytical model applicable to the exchange of information between trading partners using EDI. They conclude that EDI-based inventory control can be attractive from economic as well as other standpoints, and that the approach is consistent with, and can be instrumental in, moving towards just-in-time (JIT) inventory management. Slacker's complementary viewpoint on EDI is from the perspective of the quality relationship between the customer and the supplier. Based on the experience of Lucas, a supplier within the automotive industry, he concludes that both banks and trading companies must take responsibility for the development of payment mechanisms which satisfy the requirements of quality trading. The three final papers of this issue relate to technological innovation and are all country-based. Berman and Khalil report on a survey of US technological effectiveness in the global economy. The importance of education is supported in their conclusions, although it remains unclear to what extent the US government can play a wider role in promoting technological innovation and new industries. The role of technology in national development is taken up by Martinsons and Valdemars, who examine the case of the former Soviet Union. The failure to infuse technology successfully into Soviet enterprises is seen as a factor in that country's demise, and it is anticipated that the newly liberalised economies will be able to encourage greater technological creativity. This point is taken up in Perminov's concluding paper, which looks in detail at Russia. Here a similar analysis is made of the Soviet Union's technological decline, but a development strategy is also presented within the context of the change from a centralised to a free-market economy. The papers included in this special issue of the International Journal of Technology Management each represent a unique and particular contribution to their own specific area of concern.
Together, however, they also demonstrate the general improvements in competitive performance that can be achieved through the application of modern principles and practice to the management of quality, production and technological innovation.
Abstract:
Metaphors have been increasingly associated with cognitive functions: metaphors structure how we think and express ourselves. Metaphors are embodied in our basic physical experience, which is one reason why certain abstract concepts are expressed in more concrete terms, such as visible entities, journeys and other types of movement, spaces, etc. This communicative relevance also applies to specialised, institutionalised settings and genres, such as those produced in or related to higher education institutions, among which is spoken academic discourse. A significant research gap has been identified regarding metaphors in spoken academic discourse, a gap all the more notable given that, with increasing numbers of students in higher education and growing international research cooperation, e.g. in the form of invited lectures, spoken academic discourse is nearly omnipresent. In this context, research talks are a key research genre. A mixed-methods study was conducted to investigate metaphors in a corpus of eight fully transcribed German and English L1-speaker conference talks and invited lectures, totalling 440 minutes. A wide range of metaphor categories and functions was identified in the corpus. Abstract research concepts, such as results or theories, are expressed in terms of concrete visual entities that can be seen or shown, but also in terms of journeys or other forms of movement. The functions of these metaphors are simplification, rhetorical emphasis, theory construction and pedagogic illustration. For both the speaker and the audience or discussants, anthropomorphism makes abstract and complex ideas concretely imaginable and at the same time more interesting, because the contents of the talk appear livelier and hence closer to their own experience, which secures the audience's attention. These metaphor categories are present, with similar functions, in both the English and the German sub-corpora of this study.
Abstract:
Recently, the temporal and statistical properties of quasi-CW fiber lasers have attracted great attention. In particular, the properties of Raman fiber lasers (RFLs) have been studied both numerically and experimentally [1,2]. Experimental investigation is more challenging, as the full optical generation bandwidth (typically hundreds of GHz for RFLs) is much broader than the real-time bandwidth of oscilloscopes (up to 60 GHz for the newest models). Experimentally measured time dynamics are therefore highly bandwidth-averaged and do not provide precise information about the overall statistical properties. To overcome this, one can use the spectral filtering technique to study temporal and statistical properties within an optical bandwidth comparable to the measurement bandwidth [3], or resort to indirect measurements [4]. Ytterbium-doped fiber lasers (YDFLs) are more suitable for experimental investigation, as their generation spectrum is usually ten times narrower. Moreover, ultra-narrow-band generation has recently been demonstrated in a YDFL [5], which in principle makes it possible to measure time dynamics and statistics in real time using conventional oscilloscopes. © 2013 IEEE.
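The bandwidth-averaging problem described above can be illustrated with a minimal numerical sketch (our own illustration; the Gaussian field model and the relative bandwidths are assumptions, not parameters from the cited work):

```python
import numpy as np

# Illustrative sketch of bandwidth averaging: a detector whose bandwidth is
# much smaller than the optical bandwidth smooths out intensity fluctuations,
# pulling the normalized second moment <I^2>/<I>^2 from 2 (Gaussian light)
# towards 1 (constant intensity).
rng = np.random.default_rng(0)
n = 2**18
field = rng.normal(size=n) + 1j * rng.normal(size=n)  # model quasi-CW field
intensity = np.abs(field) ** 2

def bandwidth_limited(signal: np.ndarray, cutoff: float) -> np.ndarray:
    """Low-pass the detected intensity to mimic a finite oscilloscope bandwidth."""
    spec = np.fft.rfft(signal)
    f = np.fft.rfftfreq(signal.size)
    spec[f > cutoff] = 0
    return np.fft.irfft(spec, signal.size)

for cutoff in (0.5, 0.05, 0.005):  # detection bandwidth / optical bandwidth
    I = bandwidth_limited(intensity, cutoff)
    print(f"relative bandwidth {cutoff}: <I^2>/<I>^2 = "
          f"{np.mean(I**2) / np.mean(I)**2:.2f}")
```

The narrower the detection bandwidth relative to the optical bandwidth, the more the measured statistics are washed out, which is precisely why spectral filtering or narrow-band sources are needed for real-time statistical measurements.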
Abstract:
One of the current research trends in Enterprise Resource Planning (ERP) involves examining the critical factors for its successful implementation. However, such research is limited to system implementation and does not address the flexibility of ERP in responding to changes in business. This study therefore explores a combination system, made up of an ERP and informality, intended to provide organisations with efficient and flexible performance simultaneously. In addition, this research analyses the benefits and challenges of using such a system. The research was based on socio-technical systems (STS) theory, which contains two dimensions: 1) a technical dimension, which evaluates the performance of the system; and 2) a social dimension, which examines the impact of the system on an organisation. A mixed-methods approach was followed in this research. The qualitative part aims to understand the constraints of using a single ERP system, and to define a new system that addresses these problems. To achieve this goal, four Chinese companies operating in different industries were studied, all of which faced challenges in using an ERP system owing to complexity and uncertainty in their business environments. The quantitative part contains a discrete-event simulation study intended to examine the impact on operational performance when a company implements the hybrid system in a real-life situation. Moreover, this research conducts a further qualitative case study, the better to understand the influence of the system on an organisation. The empirical aspect of the study reveals that an ERP with pre-determined business activities cannot react promptly to unanticipated changes in a business. Incorporating informality into an ERP allows it to react to different situations by using different procedures based on the practical knowledge of frontline employees. Furthermore, the simulation study shows that the combination system can achieve a balance between efficiency and flexibility. Unlike existing research, which emphasises continuous improvement in the IT functions of an enterprise system, this research contributes a theoretical definition of a new system, which has mixed performance and contains both the formal practices embedded in an ERP and informal activities based on human knowledge. It supports both cost-efficiency in executing business transactions and flexibility in coping with business uncertainty. This research also indicates risks of using the system, such as using an ERP with limited functions, a high cost of performing informally, and low system acceptance owing to a shift in organisational culture. With respect to practical contribution, this research suggests that companies can choose the most suitable enterprise-system approach in accordance with their operational strategies. The combination system can be implemented in a company that needs to handle a medium level of volume and variety. By contrast, the traditional ERP system is better suited to a company operating in a high-volume market, while an informal system is more suitable for a firm requiring a high level of variety.
Abstract:
PURPOSE: To examine whether objective performance of near tasks is improved with various electronic vision enhancement systems (EVES) compared with the subject's own optical magnifier. DESIGN: Experimental study with a randomized, within-patient design. METHODS: This was a prospective study conducted in a hospital ophthalmology low-vision clinic. The patient population comprised 70 sequential visually impaired subjects. The magnifying devices examined were: the patient's optimum optical magnifier; magnification- and field-of-view-matched mouse EVES with monitor or head-mounted display (HMD) viewing; and stand EVES with monitor viewing. The tasks performed were: reading (speed and acuity); tracking from one column of print to the next; following a route map and locating a specific feature; and identifying specific information on a medicine label. RESULTS: Mouse EVES with HMD viewing gave lower reading speeds than stand EVES with monitor viewing (F = 38.7, P < .001). Reading with the optical magnifier was slower than with the mouse or stand EVES with monitor viewing at smaller print sizes (P < .05). The column-location task was faster with the optical magnifier than with any of the EVES (F = 10.3, P < .001). The map-tracking and medicine-label identification tasks were slower with the mouse EVES with HMD viewing than with the other magnifiers (P < .01). Previous EVES experience had no effect on task performance (P > .05), but subjects with previous optical magnifier experience were significantly slower at the medicine-label identification task with all of the EVES (P < .05). CONCLUSIONS: Although EVES provide objective benefits to the visually impaired in reading speed and acuity, together with some specific near tasks, some tasks can be performed just as fast using optical magnification. © 2003 by Elsevier Inc. All rights reserved.
Abstract:
To extend our understanding of the early visual hierarchy, we investigated the long-range integration of first- and second-order signals in spatial vision. In our first experiment we performed a conventional area summation experiment in which we varied the diameter of (a) luminance-modulated (LM) noise and (b) contrast-modulated (CM) noise. Results from the LM condition replicated previous findings with sine-wave gratings in the absence of noise, consistent with long-range integration of signal contrast over space. For CM, the summation function was much shallower than for LM, suggesting, at first glance, that the signal integration process was spatially less extensive than for LM. However, an alternative possibility was that the high-spatial-frequency noise carrier for the CM signal was attenuated by the peripheral retina (or cortex), thereby impeding our ability to observe area summation of CM in the conventional way. To test this, we developed the "Swiss cheese" stimulus of Meese and Summers (2007), in which signal area can be varied without changing the stimulus diameter, providing some protection against inhomogeneity of the retinal field. Using this technique and a two-component subthreshold summation paradigm we found that (a) CM is spatially integrated over at least five stimulus cycles (possibly more), (b) spatial integration follows square-law signal transduction for both LM and CM, and (c) the summing device integrates over spatially interdigitated LM and CM signals when they are co-oriented, but not when cross-oriented. The spatial pooling mechanism that we have identified would be a good candidate component for a module involved in representing visual textures, including their spatial extent.
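The square-law claim in (b) has a simple worked consequence, stated here as a standard illustration rather than the paper's own derivation: if each stimulus region contributes the square of its local contrast and these responses are summed linearly up to a fixed criterion k, then for n signal regions at contrast c,

\[
r \;=\; \sum_{i=1}^{n} c_i^{2} \;=\; n\,c^{2} \;=\; k
\quad\Longrightarrow\quad
c_{\text{thresh}} \;=\; \sqrt{k/n} \;\propto\; A^{-1/2},
\]

i.e. contrast thresholds fall with the inverse square root of signal area A, giving the log-log area summation slope of -1/2 that is the signature of square-law transduction.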
Abstract:
The concept of knowledge is central to solving the various problems of data mining and pattern recognition in finite spaces of Boolean or multi-valued attributes. A special form of knowledge representation, called implicative regularities, is proposed for use with two powerful tools of modern logic: inductive inference and deductive inference. The first is used for extracting knowledge from the data; the second is applied when the knowledge is used to calculate the values of the goal attribute. A set of efficient algorithms, dealing with Boolean functions and finite predicates represented by logical vectors and matrices, was developed for this purpose.
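As an illustration of the idea only (our own sketch, not the authors' vector/matrix algorithms): an implicative regularity of the form "if a set of attributes holds, then the goal attribute holds" can be tested against a Boolean data matrix by checking that no row satisfies the antecedent while violating the consequent.

```python
import numpy as np

# Sketch: test an implicative regularity "antecedent -> consequent" against a
# Boolean data matrix (rows = objects, columns = attributes).
data = np.array([
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 1],
], dtype=bool)

def holds(data: np.ndarray, antecedent: list[int], consequent: int) -> bool:
    """True if every row with all antecedent attributes set also has the consequent set."""
    mask = data[:, antecedent].all(axis=1)   # rows satisfying the antecedent
    return bool(data[mask, consequent].all())  # vacuously true if no such row

print(holds(data, [0, 2], 3))  # does attr0 & attr2 imply attr3 in this data?
```

Inductive inference corresponds to searching the data for regularities that pass such a check; deductive inference applies accepted regularities to compute unknown goal-attribute values.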
Abstract:
Alzheimer's disease (AD) is the most common form of dementia, affecting more than 35 million people worldwide. Brain hypometabolism is a major feature of AD, appearing decades before cognitive decline and pathologic lesions. To date, the majority of studies on hypometabolism in AD have used transgenic animal models or imaging studies of the human brain. As it is almost impossible to validate these findings using human tissue, alternative models are required. In this study, we show that human stem cell-derived neuron and astrocyte cultures treated with oligomers of amyloid beta 1-42 (Aβ1-42) also display a clear hypometabolism, particularly with regard to the utilization of substrates such as glucose, pyruvate, lactate, and glutamate. In addition, a significant increase in the glycogen content of the cells was observed. These changes were accompanied by changes in NAD+/NADH, ATP, and glutathione levels, suggesting a disruption of the energy-redox axis in these cultures. The high energy demands associated with neuronal functions such as memory formation and protection from oxidative stress put these cells at particular risk from Aβ-induced hypometabolism. Further research using this model may elucidate the mechanisms associated with Aβ-induced hypometabolism.