977 results for Work routine
Abstract:
This paper discusses innovations in curriculum development in the Department of Engineering at the University of Cambridge as a participant in the Teaching for Learning Network (TFLN), a teaching and learning development initiative funded by the Cambridge-MIT Institute, a pedagogic collaboration and brokerage network. A year-long research and development project investigated the practical experiences through which students traditionally explore engineering disciplines, apply and extend the knowledge gained in lectures and other settings, and begin to develop their professional expertise. The project evaluated current practice in these sessions and developed an evidence base to identify requirements for new activities, student support and staff development. The evidence collected included a novel student 'practice-value' survey highlighting effective practice and areas of concern, classroom observation of practicals, semi-structured interviews with staff, a student focus group and informal discussions with staff. Analysis of the data identified three potentially 'high-leverage' strategies for improvement: development of a more integrated teaching framework, within which practical work could be contextualised in relation to other learning; a more transparent and integrated conceptual framework in which theory and practice are more closely linked; and development of practical work more reflective of the complex problems facing professional engineers. This paper sets out key elements of the evidence collected and the changes that this evidence and analysis have informed, leading to the creation of a suite of integrated practical sessions carefully linked to other course elements and reinforcing central concepts in engineering, accompanied by a training and support programme for teaching staff.
Abstract:
OBJECTIVE: This work is concerned with the creation of three-dimensional (3D) extended-field-of-view ultrasound from a set of volumes acquired using a mechanically swept 3D probe. 3D volumes of ultrasound data can be registered by attaching a position sensor to the probe; however, this can be inconvenient in a clinical setting, and a position sensor can also introduce misalignment due to patient movement and respiratory motion. We propose a combination of three-degrees-of-freedom image registration and an unobtrusively integrated inertial sensor for measuring orientation. The aim of this research is to produce a reliable and portable ultrasound system that can register 3D volumes quickly, making it suitable for clinical use. METHOD: As part of a feasibility study we recruited 28 pregnant women attending for routine obstetric scans to undergo 3D extended-field-of-view ultrasound. A total of 49 data sets were recorded. Each registered data set was assessed for correct alignment of each volume by two independent observers. RESULTS: In 77-83% of the data sets, more than four consecutive volumes registered correctly. Successful registration relies on good overlap between volumes and is adversely affected by advancing gestational age and foetal movement. CONCLUSION: The development of reliable 3D extended-field-of-view ultrasound may help ultrasound practitioners to demonstrate the anatomical relation of pathology and provide a convenient way to store data.
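As a rough illustration of the proposed pipeline, the sketch below (Python, with assumed helper names; not the authors' implementation) orients the incoming volume with the rotation reported by the inertial sensor and then estimates the remaining three translational degrees of freedom by maximising normalised cross-correlation:

```python
import numpy as np
from scipy import ndimage, optimize

def resample_with_rotation(vol, rot):
    """Resample `vol` about its centre; `rot` maps output voxel coordinates to
    input coordinates (i.e. the inverse of the probe rotation from the sensor)."""
    centre = (np.array(vol.shape) - 1) / 2.0
    offset = centre - rot @ centre
    return ndimage.affine_transform(vol, rot, offset=offset, order=1)

def ncc(a, b):
    """Normalised cross-correlation of two equally sized volumes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def register_translation(fixed, moving, rot):
    """Estimate the 3-DOF translation (in voxels) aligning `moving` to `fixed`,
    after orienting `moving` with the inertial-sensor rotation."""
    moving_rot = resample_with_rotation(moving, rot)
    cost = lambda t: -ncc(fixed, ndimage.shift(moving_rot, t, order=1))
    return optimize.minimize(cost, x0=np.zeros(3), method="Nelder-Mead").x
```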
Abstract:
In any thermoacoustic analysis, it is important to predict not only the linear frequencies and growth rates but also the amplitudes and frequencies of any limit cycles. The Flame Describing Function (FDF) approach is a quasi-linear analysis that allows both the linear and nonlinear behaviour of a thermoacoustic system to be predicted: linear growth rates and frequencies, and also the amplitudes and frequencies of any limit cycles. The FDF achieves this by assuming that the acoustics are linear and that the flame, which is the only nonlinear element in the thermoacoustic system, can be adequately described by considering only its response at the frequency at which it is forced. Any harmonics generated by the flame's nonlinear response are therefore not considered. This implies that these nonlinear harmonics are small or that they are sufficiently filtered out by the linear dynamics of the system (the low-pass filter assumption). In this paper, a flame model with a simple saturation nonlinearity is coupled to simple duct acoustics, and the success of the FDF in predicting limit cycles is studied over a range of flame positions and acoustic damping parameters. Although these two parameters affect only the linear acoustics and not the nonlinear flame dynamics, they determine the validity of the low-pass filter assumption made in applying the flame describing function approach. Their importance is highlighted by studying how well an FDF-based analysis succeeds as they are varied. This is achieved by comparing the FDF's prediction of limit-cycle amplitudes to the amplitudes seen in time-domain simulations.
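A minimal numerical sketch of the describing-function idea, using a generic saturation nonlinearity rather than the paper's flame model: the nonlinearity is replaced by an amplitude-dependent gain N(A), and a limit cycle is predicted at the amplitude where N(A) falls to the critical gain at which the linear acoustics are marginally stable (that critical gain is assumed to be known here).

```python
import numpy as np
from scipy.optimize import brentq

def describing_function(A, sat=1.0):
    """Amplitude-dependent gain of an ideal saturation (unit slope, limit `sat`)."""
    if A <= sat:
        return 1.0
    r = sat / A
    return (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r * r))

def limit_cycle_amplitude(critical_gain, sat=1.0):
    """Amplitude A* at which N(A*) equals the (assumed known) marginal-stability gain."""
    if critical_gain >= 1.0:
        return None  # the system is linearly stable: no limit cycle is predicted
    return brentq(lambda A: describing_function(A, sat) - critical_gain, sat, 1e6 * sat)

print(limit_cycle_amplitude(0.6))  # amplitude at which the saturating gain has fallen to 0.6
```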
Abstract:
Relatively new in the UK, soil mix technology applied to the in-situ remediation of contaminated land involves the use of mixing tools and additives to construct permeable reactive in-ground barriers and low-permeability containment walls, and to treat hot spots by stabilisation/solidification. It is a cost-effective and versatile approach with numerous environmental advantages. Further commercial advantages can be realised by combining it with ground improvement through the development of a single integrated soil mix technology system, which is the core objective of Project SMiRT (Soil Mix Remediation Technology). SMiRT is a large UK-based R&D project involving academia-industry collaboration, with tasks including equipment development, laboratory treatability studies, field trials, stakeholder consultation and dissemination activities. This paper presents the aspects of Project SMiRT relating to the laboratory treatability study work leading to the design of the field trials. © 2012 American Society of Civil Engineers.
Abstract:
The interface dipole and its role in effective work function (EWF) modulation by Al incorporation are investigated. Our study shows that the interface dipole located at the high-k/SiO2 interface causes an electrostatic potential difference across the metal/high-k interface, which significantly shifts the band alignment between the metal and the high-k and consequently modulates the EWF. The electrochemical potential equalization and electrostatic potential methods are used to evaluate the interface dipole and its contribution. The calculated EWF modulation agrees with experimental data and can provide insight into the control of the EWF in future pMOS technology.
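In the simplest electrostatic picture (a schematic relation, not the paper's model), the dipole layer adds a potential step to the band alignment, so that

\[
\Phi_{\mathrm{eff}} \approx \Phi_{\mathrm{metal}} + e\,\Delta V_{\mathrm{dip}},
\qquad
\Delta V_{\mathrm{dip}} = \frac{N_{\mathrm{dip}}\,p}{\varepsilon_0 \varepsilon_r},
\]

where \(N_{\mathrm{dip}}\) is the areal density of Al-induced dipoles at the high-k/SiO2 interface, \(p\) their dipole moment and \(\varepsilon_r\) the local permittivity; the sign and magnitude of \(\Delta V_{\mathrm{dip}}\) set the direction and size of the EWF shift.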
Abstract:
ZnO films deposited by magnetron sputtering were treated with H or O plasma. It is found that the field emission (FE) characteristics of the ZnO film are considerably improved after H-plasma treatment and slightly deteriorated after O-plasma treatment. The improvement in FE characteristics is attributed to the reduced work function and the increased conductivity of the ZnO:H films. Conductive atomic force microscopy was employed to investigate the effect of the plasma treatment on the nanoscale conductivity of ZnO; these findings correlate well with the FE data and facilitate a clearer description of electron emission from the ZnO:H films.
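The link between a reduced work function and improved field emission can be made explicit with the standard Fowler-Nordheim relation (quoted here in its generic form, not taken from the paper):

\[
J = \frac{A\,(\beta E)^2}{\phi}\,\exp\!\left(-\frac{B\,\phi^{3/2}}{\beta E}\right),
\]

where \(J\) is the emission current density, \(E\) the applied field, \(\beta\) the field-enhancement factor, \(\phi\) the work function and \(A\), \(B\) the Fowler-Nordheim constants; lowering \(\phi\) after H-plasma treatment increases \(J\) at a given field.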
Abstract:
In our previous paper, the expanding cavity model (ECM) and the Lamé solution were used to obtain an analytical expression for the scale ratio between hardness (H) to reduced modulus (E_r) and unloading work (W_u) to total work (W_t) of indentation for elastic-perfectly plastic materials. In this paper, the more general work-hardening (linear and power-law) materials are studied. Our previous conclusion, that this ratio depends mainly on the cone angle of the indenter, holds not only for elastic-perfectly plastic materials but also for work-hardening materials. These results were also verified by numerical simulations.
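In symbols, the relation studied is of the form (a schematic statement consistent with the abstract, not the paper's full expression)

\[
\frac{H}{E_r} \approx \kappa(\theta)\,\frac{W_u}{W_t},
\]

where \(\theta\) is the half-angle of the conical indenter and \(\kappa(\theta)\) the scale ratio; the result reported here is that \(\kappa\) remains essentially the same for linear and power-law work-hardening materials as for elastic-perfectly plastic ones.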
Abstract:
Organic thin-film transistors (OTFTs) using the high-dielectric-constant material tantalum pentoxide (Ta2O5) and benzocyclobutenone (BCBO) derivatives as a double-layer insulator were fabricated. Three metals with different work functions, namely Al (4.3 eV), Cr (4.5 eV) and Au (5.1 eV), were employed as gate electrodes to study the correlation between the work function of the gate metal and the hysteresis characteristics of the OTFTs. Devices with the low-work-function metals Al or Cr as the gate electrode exhibited large hysteresis (a threshold-voltage shift of about 2.5 V), whereas low-hysteresis OTFTs (a threshold-voltage shift of about 0.7 V) were attained with the high-work-function metal Au as the gate electrode.
Abstract:
Some G-quadruplex DNA aptamers have been found to bind hemin strongly, forming DNAzymes with peroxidase-like activity. To help determine the most suitable DNAzymes and to understand how they work, five previously reported G-quadruplex aptamers were compared for their hemin-binding affinity, and the potential catalytic mechanism of the corresponding hemin-G-quadruplex DNAzymes was then explored. Among these aptamers, the G-quadruplex named AGRO100 was shown to possess the highest hemin-binding affinity and the best DNAzyme function, making AGRO100 the most suitable candidate for DNAzyme-based analysis. Furthermore, we found the peroxidase-like activity of the DNAzyme to be primarily dependent on the concentration of H2O2 and independent of that of the peroxidase substrate (2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt, ABTS). Accordingly, a reaction mechanism for DNAzyme-catalysed peroxidation is proposed. This study provides new insights into G-quadruplex-based DNAzymes and will help to further extend their applications in the analytical field.
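The stated concentration dependence amounts to a rate law of the form (a schematic summary, not the paper's fitted kinetic model)

\[
v \approx k_{\mathrm{obs}}\,[\mathrm{DNAzyme}]\,[\mathrm{H_2O_2}],
\qquad
\frac{\partial v}{\partial[\mathrm{ABTS}]} \approx 0,
\]

i.e. turnover is limited by the reaction of the hemin-G-quadruplex with H2O2 and is effectively zero order in the ABTS substrate.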
Abstract:
Recent research carried out at the Chinese Institute of Applied Chemistry has contributed significantly to the understanding of the radiation chemistry of polymers. High-energy radiation has been used successfully to cross-link fluoropolymers and polyimides. Chain flexibility has been shown to play an important role here, and T-type structures were found to exist in the cross-linked fluoropolymers. A modified Charlesby-Pinner equation, based upon the importance of chain flexibility, was developed to account for the relationship between sol fraction and radiation dose in systems of this type. An XPS method has been developed to measure the cross-linking yields in aromatic polymers and fluoropolymers, based upon the dose dependence of the aromatic shake-up peaks and the F/C ratios, respectively. Methods for the radiation cross-linking of degrading polymers in polymer blends have also been developed, as have methods for improving the radiation resistance of polymers through radiation cross-linking.
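For context, the classical Charlesby-Pinner relation that the modified equation extends is

\[
s + \sqrt{s} = \frac{p_0}{q_0} + \frac{1}{q_0\,u_1\,D},
\]

where \(s\) is the sol fraction, \(p_0\) and \(q_0\) the scission and cross-linking densities per unit dose, \(u_1\) the number-average degree of polymerisation and \(D\) the absorbed dose; the modified, chain-flexibility-dependent form developed in this work is not reproduced here.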
Abstract:
Seismic exploration is the principal tool of petroleum exploration. As demand for petroleum grows and the level of exploration advances, imaging areas of complex geological structure has become the main task of the oil industry. Prestack depth migration was developed for this purpose: it images complex structures well, but its result depends strongly on the velocity model, so velocity model building for prestack depth migration has become a major research area. This thesis systematically analyses the differences between prestack depth migration practice in China and abroad, and develops a tomographic velocity-analysis method that requires no layered velocity model, a residual-curvature velocity-analysis method based on the velocity model, and a method for removing pre-processing errors. The tomographic approach to velocity analysis is examined first. It is theoretically well founded but difficult to apply: it compares picked first-arrival (event) times with the times calculated in a trial velocity model and back-projects the residuals along the ray paths to obtain an updated model. Its only assumption is the high-frequency approximation, so it is effective and efficient. Its weakness is that picking is difficult in prestack data, where the signal-to-noise ratio is low and many events cross one another, particularly in areas of complex geology. To address the picking problem, a new tomographic velocity-analysis method requiring no layered velocity model is developed: events need not be picked continuously and can be picked wherever the data are reliable. The method requires not only the picked times used by routine tomography but also the local slopes of events, and a high-resolution slope-analysis method is introduced to improve picking precision. Residual-curvature velocity analysis is also investigated; its performance and efficiency are found to be poor because its assumptions are rigid and it is a local optimisation method, so it cannot resolve the velocity in areas of strong lateral velocity variation. A globally optimised residual-curvature velocity-building method based on the velocity model is therefore developed to improve the precision of model building. The established workflow for prestack depth migration follows foreign practice: the original seismic data are first corrected to a datum plane, and the velocity model is then built and the migration performed. The well-known success story is the Gulf of Mexico, where the near-surface structure is simple, pre-processing is straightforward and its precision is high. In China, however, most seismic work is on land, the near-surface is complex, and in some areas the pre-processing errors are large and degrade velocity model building. A new method is therefore developed to remove pre-processing errors and improve the precision of the velocity model. The main contributions are: (1) an effective tomographic velocity-building method requiring no layered velocity model; (2) a new high-resolution slope-analysis method; (3) a globally optimised residual-curvature velocity-building method based on the velocity model; and (4) an effective method for removing pre-processing errors.
All of the methods listed above have been verified with theoretical calculations and real seismic data.
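A minimal sketch (not the thesis code) of the back-projection step described above: traveltime residuals between picked and modelled first arrivals are distributed along each ray path to update a gridded slowness model, in the spirit of a SIRT iteration.

```python
import numpy as np

def tomographic_update(slowness, rays, t_picked, t_modelled, step=0.5):
    """One back-projection update of a 1-D array of cell slownesses (s/m).
    rays       : list of (cell_indices, segment_lengths_in_m) for each ray
    t_picked   : picked first-arrival times (s)
    t_modelled : times computed by ray tracing in the current model (s)
    """
    residuals = np.asarray(t_picked) - np.asarray(t_modelled)
    update = np.zeros_like(slowness)
    hits = np.zeros_like(slowness)
    for (cells, lengths), dt in zip(rays, residuals):
        # minimum-norm slowness perturbation satisfying sum(l_j * ds_j) = dt
        update[cells] += dt * lengths / np.sum(lengths**2)
        hits[cells] += 1
    hits[hits == 0] = 1                      # avoid dividing by zero in unsampled cells
    return slowness + step * update / hits   # damped average over all rays crossing a cell
```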
Abstract:
I wish to propose a quite speculative new version of the grandmother cell theory to explain how the brain, or parts of it, may work. In particular, I discuss how the visual system may learn to recognize 3D objects. The model would apply directly to the cortical cells involved in visual face recognition. I will also outline the relation of our theory to existing models of the cerebellum and of motor control. Specific biophysical mechanisms can be readily suggested as part of a basic type of neural circuitry that can learn to approximate multidimensional input-output mappings from sets of examples and that is expected to be replicated in different regions of the brain and across modalities. The main points of the theory are: (1) the brain uses modules for multivariate function approximation as basic components of several of its information-processing subsystems; (2) these modules are realized as HyperBF networks (Poggio and Girosi, 1990a,b); and (3) HyperBF networks can be implemented in terms of biologically plausible mechanisms and circuitry. The theory predicts a specific type of population coding that represents an extension of schemes such as look-up tables. I will conclude with some speculations about the trade-off between memory and computation and the evolution of intelligence.
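The kind of module the theory appeals to can be illustrated with a plain Gaussian radial-basis-function network (a sketch only; the actual HyperBF formulation of Poggio and Girosi (1990) also learns the centres and a weighted norm):

```python
import numpy as np

def fit_rbf(X, y, centres, sigma=1.0, reg=1e-6):
    """Solve for the linear output weights of a Gaussian RBF expansion."""
    G = np.exp(-np.sum((X[:, None, :] - centres[None, :, :])**2, axis=2) / (2 * sigma**2))
    # regularised least squares: (G'G + reg*I) w = G'y
    return np.linalg.solve(G.T @ G + reg * np.eye(len(centres)), G.T @ y)

def predict_rbf(Xq, centres, w, sigma=1.0):
    """Evaluate the learned input-output mapping at query points Xq."""
    G = np.exp(-np.sum((Xq[:, None, :] - centres[None, :, :])**2, axis=2) / (2 * sigma**2))
    return G @ w

# toy usage: learn a 2-D -> 1-D mapping from a handful of examples
X = np.random.rand(50, 2)
y = np.sin(X[:, 0]) + X[:, 1]**2
w = fit_rbf(X, y, centres=X[:10])
print(predict_rbf(X[:3], X[:10], w))
```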
Abstract:
SIN and SOLDIER are heuristic programs in LISP which solve symbolic integration problems. SIN (Symbolic INtegrator) solves indefinite integration problems at a level of difficulty approaching that of the larger integral tables. SIN contains several more methods than were used in the previous symbolic integration program SAINT, and solves most of the problems attempted by SAINT in less than one second. SOLDIER (SOLution of Ordinary Differential Equations Routine) solves first-order, first-degree ordinary differential equations at the level of a good college sophomore, averaging about five seconds per problem attempted. The differences in philosophy and operation between SAINT and SIN are described, and suggestions for extending the work are made.