969 results for first principles calculations
Abstract:
We report new experiments that test quantum dynamical predictions of polarization squeezing for ultrashort photonic pulses in a birefringent fiber, including all relevant dissipative effects. This exponentially complex many-body problem is solved by means of a stochastic phase-space method. The squeezing is calculated and compared to experimental data, resulting in excellent quantitative agreement. From the simulations, we identify the physical limits to quantum noise reduction in optical fibers. The research represents a significant experimental test of first-principles time-domain quantum dynamics in a one-dimensional interacting Bose gas coupled to dissipative reservoirs.
Abstract:
Whilst research on work group diversity has proliferated in recent years, relatively little attention has been paid to the precise definition of diversity or its measurement. One of the few studies to do so is Harrison and Klein’s (2007) typology, which defined three types of diversity – separation, variety and disparity – and suggested possible indices with which they should be measured. However, their typology is limited by its association of diversity types with variable measurement, by a lack of clarity over the meaning of variety, and by the absence of clear guidance about which diversity index should be employed. In this thesis I develop an extended version of the typology, including four diversity types (separation, range, spread and disparity), and propose specific indices to be used for each type of diversity with each variable type (ratio, interval, ordinal and nominal). Indices are chosen or derived from first principles based on the precise definition of the diversity type. I then test the usefulness of these indices in predicting outcomes of diversity compared with other indices, using both an extensive simulated data set (to estimate the effects of mis-specification of diversity type or index) and eight real data sets (to examine whether the proposed indices produce the strongest relationships with hypothesised outcomes). The analyses lead to the conclusion that the indices proposed in the typology are at least as good as, and usually better than, other indices in terms of both the effect sizes measured and the power to detect significant results, and thus provide evidence to support the typology. Implications for theory and methodology are discussed.
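As a hedged illustration (these specific indices – the standard deviation for separation and Blau's index for variety – are common choices in the wider diversity literature, not necessarily the indices this thesis proposes), two of the diversity types can be computed as:

```python
from collections import Counter
from math import sqrt

def separation_sd(values):
    """Separation: population standard deviation of an interval-scale attribute."""
    n = len(values)
    mean = sum(values) / n
    return sqrt(sum((v - mean) ** 2 for v in values) / n)

def variety_blau(categories):
    """Variety: Blau's index, 1 - sum of squared category proportions."""
    n = len(categories)
    return 1.0 - sum((count / n) ** 2 for count in Counter(categories).values())
```

For example, `separation_sd([1, 2, 3, 4, 5])` gives about 1.414, while `variety_blau(["a", "a", "b", "c"])` gives 0.625; Blau's index is maximised when group members are spread evenly across categories.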
Abstract:
PURPOSE: To evaluate theoretically three previously published formulae that use intra-operative aphakic refractive error to calculate intraocular lens (IOL) power, not necessitating pre-operative biometry. The formulae are as follows: IOL power (D) = Aphakic refraction × 2.01 [Ianchulev et al., J. Cataract Refract. Surg. 31 (2005) 1530]; IOL power (D) = Aphakic refraction × 1.75 [Mackool et al., J. Cataract Refract. Surg. 32 (2006) 435]; IOL power (D) = 0.07x² + 1.27x + 1.22, where x = aphakic refraction [Leccisotti, Graefes Arch. Clin. Exp. Ophthalmol. 246 (2008) 729]. METHODS: Gaussian first-order calculations were used to determine the relationship between intra-operative aphakic refractive error and the IOL power required for emmetropia in a series of schematic eyes incorporating varying corneal powers, pre-operative crystalline lens powers, axial lengths and post-operative IOL positions. The three previously published formulae, based on empirical data, were then compared in terms of IOL power errors that arose in the same schematic eye variants. RESULTS: An inverse relationship exists between theoretical ratio and axial length. Corneal power and initial lens power have little effect on calculated ratios, whilst final IOL position has a significant impact. None of the three empirically derived formulae are universally accurate, but each is able to predict IOL power precisely in certain theoretical scenarios. The formulae derived by Ianchulev et al. and Leccisotti are most accurate for posterior IOL positions, whereas the Mackool et al. formula is most reliable when the IOL is located more anteriorly. CONCLUSION: Final IOL position was found to be the chief determinant of IOL power errors. Although the A-constants of IOLs are known and may be accurate, a variety of factors can still influence the final IOL position and lead to undesirable refractive errors. Optimum results using these novel formulae would be achieved in myopic eyes.
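The three published formulae are simple enough to sketch directly; the function names below are illustrative, and x is the intra-operative aphakic refractive error in dioptres:

```python
def iol_ianchulev(x):
    """Ianchulev et al. (2005): IOL power = 2.01 x aphakic refraction."""
    return 2.01 * x

def iol_mackool(x):
    """Mackool et al. (2006): IOL power = 1.75 x aphakic refraction."""
    return 1.75 * x

def iol_leccisotti(x):
    """Leccisotti (2008): IOL power = 0.07x^2 + 1.27x + 1.22."""
    return 0.07 * x ** 2 + 1.27 * x + 1.22
```

For an aphakic refraction of +10 D the formulae give 20.1 D, 17.5 D and 20.92 D respectively, illustrating how the linear and quadratic fits diverge most for the Mackool ratio, consistent with its different assumed IOL position.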
Abstract:
The increasing cost of developing complex software systems has created a need for tools which aid software construction. One area in which significant progress has been made is with the so-called Compiler Writing Tools (CWTs); these aim at automated generation of various components of a compiler and hence at expediting the construction of complete programming language translators. A number of CWTs are already in quite general use, but investigation reveals significant drawbacks with current CWTs, such as lex and yacc. The effective use of a CWT typically requires a detailed technical understanding of its operation and involves tedious and error-prone input preparation. Moreover, CWTs such as lex and yacc address only a limited aspect of the compilation process; for example, actions necessary to perform lexical symbol valuation and abstract syntax tree construction must be explicitly coded by the user. This thesis presents a new CWT called CORGI (COmpiler-compiler from Reference Grammar Input) which deals with the entire `front-end' component of a compiler; this includes the provision of necessary data structures and routines to manipulate them, both generated from a single input specification. Compared with earlier CWTs, CORGI has a higher-level and hence more convenient user interface, operating on a specification derived directly from a `reference manual' grammar for the source language. Rather than developing a compiler-compiler from first principles, CORGI has been implemented by building a further shell around two existing compiler construction tools, namely lex and yacc. CORGI has been demonstrated to perform efficiently in realistic tests, both in terms of speed and the effectiveness of its user interface and error-recovery mechanisms.
Abstract:
This thesis describes a novel connectionist machine utilizing induction by a Hilbert hypercube representation. This representation offers a number of distinct advantages which are described. We construct a theoretical and practical learning machine which lies in an area of overlap between three disciplines - neural nets, machine learning and knowledge acquisition - hence it is referred to as a "coalesced" machine. To this unifying aspect are added the various advantages of its orthogonal lattice structure as against less structured nets. We discuss the case for such a fundamental and low level empirical learning tool, and the assumptions behind the machine are clearly outlined. Our theory of an orthogonal lattice structure - the Hilbert hypercube of an n-dimensional space using a complemented distributive lattice - as a basis for supervised learning is derived from first principles, following clearly stated scientific principles. The resulting "subhypercube theory" was implemented in a development machine which was then used to test the theoretical predictions, again under strict scientific guidelines. The scope, advantages and limitations of this machine were tested in a series of experiments. Novel and seminal properties of the machine include: the "metrical", deterministic and global nature of its search; complete convergence invariably producing minimum polynomial solutions for both disjuncts and conjuncts even with moderate levels of noise present; a learning engine which is mathematically analysable in depth based upon the "complexity range" of the function concerned; a strong bias towards the simplest possible globally (rather than locally) derived "balanced" explanation of the data; the ability to cope with variables in the network; and new ways of reducing the exponential explosion. Performance issues were addressed, and comparative studies with other learning machines indicate that our novel approach has definite value and should be further researched.
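The abstract gives no implementation detail, but the "metrical" search it describes operates on the vertices of an n-dimensional hypercube, where the natural lattice metric is Hamming distance. A minimal sketch of that setting, with hypothetical helper names:

```python
from itertools import product

def hypercube_vertices(n):
    """All 2^n vertices of the n-dimensional hypercube as bit tuples."""
    return list(product((0, 1), repeat=n))

def hamming(u, v):
    """Lattice ('metrical') distance between two vertices:
    the number of coordinates in which they differ."""
    return sum(a != b for a, b in zip(u, v))
```

On this lattice, neighbouring vertices differ in exactly one bit, which is what makes a deterministic, globally ordered search of the structure possible.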
Abstract:
The recent explosive growth in advanced manufacturing technology (AMT) and continued development of sophisticated information technologies (IT) is expected to have a profound effect on the way we design and operate manufacturing businesses. Furthermore, the escalating capital requirements associated with these developments have significantly increased the level of risk associated with initial design, ongoing development and operation. This dissertation has examined the integration of two key sub-elements of the Computer Integrated Manufacturing (CIM) system, namely the manufacturing facility and the production control system. This research has concentrated on the interactions between production control (MRP) and an AMT based production facility. The disappointing performance of such systems has been discussed in the context of a number of potential technological and performance incompatibilities between these two elements. It was argued that the design and selection of operating policies for both is the key to successful integration. Furthermore, policy decisions are shown to play an important role in matching the performance of the total system to the demands of the marketplace. It is demonstrated that a holistic approach to policy design must be adopted if successful integration is to be achieved. It is shown that the complexity of the issues resulting from such an approach required the formulation of a structured design methodology. Such a methodology was subsequently developed and discussed. This combined a first principles approach to the behaviour of system elements with the specification of a detailed holistic model for use in the policy design environment. The methodology aimed to make full use of the `low inertia' characteristics of AMT, whilst adopting a JIT configuration of MRP and re-coupling the total system to the market demands. 
This dissertation discussed the application of the methodology to an industrial case study and the subsequent design of operational policies. Consequently, a novel approach to production control resulted, a central feature of which was a move toward reduced manual intervention in the MRP processing and scheduling logic, with increased human involvement and motivation in the management of work-flow on the shopfloor. Experimental results indicated that significant performance advantages would result from the adoption of the recommended policy set.
Abstract:
This thesis records the design and development of an electrically driven, air to water, vapour compression heat pump of nominally 6 kW heat output, for residential space heating. The study was carried out on behalf of GEC Research Ltd through the Interdisciplinary Higher Degrees Scheme at Aston University. A computer based mathematical model of the vapour compression cycle was produced as a design aid, to enable the effects of component design changes or variations in operating conditions to be predicted. This model is supported by performance testing of the major components, which revealed that improvements in the compressor isentropic efficiency offer the greatest potential for further increases in cycle COPh. The evaporator was designed from first principles, and is based on wire-wound heat transfer tubing. Two evaporators, of air-side area 10.27 and 16.24 m², were tested in a temperature and humidity controlled environment, demonstrating that the benefits of the large coil are greater heat pump heat output and lower noise levels. A systematic study of frost growth rates suggested that this problem is most severe at the conditions of saturated air at 0 °C combined with low condenser water temperature. A dynamic simulation model was developed to predict the in-service performance of the heat pump. This study confirmed the importance of an adequate radiator area for heat pump installations. A prototype heat pump was designed and manufactured, consisting of a hermetic reciprocating compressor, a coaxial tube condenser and a helically coiled evaporator, using Refrigerant 22. The prototype was field tested in a domestic environment for one and a half years. The installation included a comprehensive monitoring system. Initial problems were encountered with defrosting and compressor noise, both of which were solved. The unit then operated throughout the 1985/86 heating season without further attention, producing a COPh of 2.34.
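As a quick worked check of the reported figures (assuming the usual definition COPh = heat output / electrical input, which the abstract does not spell out), the electrical input implied by a COPh of 2.34 at the nominal 6 kW heat output is about 2.56 kW:

```python
def electrical_input_kw(heat_output_kw, cop_h):
    """Electrical input implied by a heating COP: W = Q / COPh."""
    return heat_output_kw / cop_h
```

So roughly 3.4 kW of the delivered heat is drawn from the outside air, which is the efficiency advantage a heat pump holds over direct electric heating.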
Abstract:
The microscopic origin of the intermediate phase in two prototypical covalently bonded AxB1-x network glass forming systems, where A=Ge or Si, B=Se, and 0 ≤ x ≤ 1, was investigated by combining neutron diffraction with first-principles molecular-dynamics methods. Specifically, the structure of glassy GeSe4 and SiSe4 was examined, and the calculated total structure factor and total pair-correlation function for both materials are in good agreement with experiment. The structure of both glasses differs markedly from a simple model comprising undefective AB4 corner-sharing tetrahedra in which all A atoms are linked by B2 dimers. Instead, edge-sharing tetrahedra occur and the twofold coordinated Se atoms form three distinct structural motifs, namely, Se-Se2, Se-SeGe (or Se-SeSi), and Se-Ge2 (or Se-Si2). This identifies several of the conformations that are responsible for the structural variability in GexSe1-x and SixSe1-x glasses, a quantity that is linked to the finite width of the intermediate phase window.
Abstract:
Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem. Much of the advice that does exist relies on custom and practice rather than a rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation used were identified; these are System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). This research has examined these approaches in two stages. Firstly, a first principles analysis was carried out in order to challenge the received wisdom about their strengths and weaknesses, and a series of propositions was developed from this initial analysis. The second stage was to use the case study approach to test these propositions and to provide further empirical evidence to support their comparison. The contributions of this research are both in terms of knowledge and practice. In terms of knowledge, this research is the first holistic cross-paradigm comparison of the three main approaches in the supply chain domain. Case studies have involved building 'back-to-back' models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This has led to contributions concerning the limitations of applying SD to operational problem types. SD has also been found to have risks when applied to strategic and policy problems. Discrete methods have been found to have potential for exploring strategic problem types. It has been found that discrete simulation methods can model material and information feedback successfully. Further insights have been gained into the relationship between modelling purpose and modelling approach.
In terms of practice, the findings have been summarised in the form of a framework linking modelling purpose, problem characteristics and simulation approach.
Abstract:
The full set of partial structure factors for glassy germania, or GeO2, was accurately measured by using the method of isotopic substitution in neutron diffraction in order to elucidate the nature of the pair correlations for this archetypal strong glass former. The results show that the basic tetrahedral Ge(O1/2)4 building blocks share corners with a mean inter-tetrahedral Ge-O-Ge bond angle of 132(2)°. The topological and chemical ordering in the resultant network displays two characteristic length scales at distances greater than the nearest neighbour. One of these describes the intermediate range order, and manifests itself by the appearance of a first sharp diffraction peak in the measured diffraction patterns at a scattering vector kFSDP ≈ 1.53 Å⁻¹, while the other describes so-called extended range order, and is associated with the principal peak at kPP = 2.66(1) Å⁻¹. We find that there is an interplay between the relative importance of the ordering on these length scales for tetrahedral network forming glasses that is dominated by the extended range ordering with increasing glass fragility. The measured partial structure factors for glassy GeO2 are used to reproduce the total structure factor measured by using high-energy x-ray diffraction, and the experimental results are also compared to those obtained by using classical and first principles molecular dynamics simulations.
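Reconstructing a total structure factor from measured partials is a weighted sum over species pairs. A minimal sketch, assuming the Faber-Ziman convention F(k) = Σij ci cj bi bj [Sij(k) − 1] with atomic fractions ci and coherent neutron scattering lengths bi (the b values in the usage below are the standard tabulated ones, not figures taken from this abstract):

```python
def total_structure_factor(S, c, b):
    """Neutron-weighted total F(k) = sum_ij c_i c_j b_i b_j (S_ij(k) - 1).

    S: dict mapping ordered species pairs (i, j) to the partial S_ij at one k;
    c: atomic fractions by species; b: coherent scattering lengths.
    """
    species = list(c)
    return sum(c[i] * c[j] * b[i] * b[j] * (S[(i, j)] - 1.0)
               for i in species for j in species)
```

For GeO2 one would take c = {"Ge": 1/3, "O": 2/3} and b = {"Ge": 8.185, "O": 5.803} (fm); where every partial equals its large-k limit of 1, the interference part of F(k) vanishes, as expected.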
Abstract:
Introduction: Production of functionalised particles using dry powder coating is a one-step, environmentally friendly process that paves the way for the development of particles with targeted properties and diverse functionalities. Areas covered: By applying first principles of the physical science of powders, fine guest particles can be homogeneously dispersed over the surface of larger host particles to develop functionalised particles. Multiple functionalities can be modified, including: flowability, dispersibility, fluidisation, homogeneity, content uniformity and dissolution profile. The current publication seeks to understand the fundamental underpinning principles and science governing the dry coating process, to evaluate key technologies developed to produce functionalised particles, outlining their advantages, limitations and applications, and to discuss in detail the resultant functionalities and their applications. Expert opinion: Dry particle coating is a promising solvent-free manufacturing technology to produce particles with targeted functionalities. Progress within this area requires the development of continuous processing devices that can overcome challenges encountered with current technologies, such as heat generation and particle attrition. Growth within this field requires extensive research to further understand the impact of process design and material properties on resultant functionalities.
Abstract:
The student's film work accompanying this thesis in the form of a DVD is available at the Médiathèque of the Bibliothèque des lettres et sciences humaines under the title: YT Remix (documentary); sound design for an excerpt of L'homme à la caméra (D. Vertov). (https://umontreal.on.worldcat.org/oclc/957316713)
Abstract:
We use first-principles electronic structure methods to show that the piezoresistive strain gauge factor of single-crystalline bulk n-type silicon-germanium alloys at carefully controlled composition can reach values of G = 500, three times larger than that of silicon, the most sensitive such material used in industry today. At cryogenic temperatures of 4 K we find gauge factors of G = 135 000, 13 times larger than that observed in Si whiskers. The improved piezoresistance is achieved by tuning the scattering of carriers between different (Δ and L) conduction band valleys by controlling the alloy composition and strain configuration.
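Assuming the usual definition of the piezoresistive gauge factor, G = (ΔR/R)/ε (the abstract quotes G values but not the definition), the reported G = 500 implies a 5% resistance change at a strain of only 10⁻⁴; a one-line sketch:

```python
def fractional_resistance_change(gauge_factor, strain):
    """Piezoresistive response: dR/R = G * strain."""
    return gauge_factor * strain
```

At the cryogenic value G = 135 000 the same 10⁻⁴ strain would change the resistance by a factor of order ten, which is why such materials are attractive for high-sensitivity strain sensing.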
Abstract:
Modification of TiO2 with metal oxide nanoclusters such as FeOx and NiOx has been shown to be a promising approach to the design of new photocatalysts with visible light absorption and improved electron–hole separation. To study further the factors that determine the photocatalytic properties of structures of this type, we present in this paper a first principles density functional theory (DFT) investigation of TiO2 rutile(110) and anatase(001) modified with PbO and PbO2 nanoclusters, with Pb2+ and Pb4+ oxidation states. This allows us to unravel the effect of the Pb oxidation state on the photocatalytic properties of PbOx-modified TiO2. The nanoclusters adsorb strongly at all TiO2 surfaces, creating new Pb–O and Ti–O interfacial bonds. Modification with PbO and PbO2 nanoclusters introduces new states in the original band gap of rutile and anatase. However, the oxidation state of Pb has a dramatic impact on the nature of the modifications of the band edges of TiO2 and on the electron–hole separation mechanism. PbO nanocluster modification leads to an upwards shift of the valence band which reduces the band gap and, upon photoexcitation, results in hole localisation on the PbO nanocluster and electron localisation on the surface. By contrast, for PbO2 nanocluster modification the hole will be localised on the TiO2 surface and the electron on the nanocluster, thus giving rise to two different band gap reduction and electron–hole separation mechanisms. We find no crystal structure sensitivity, with both rutile and anatase surfaces showing similar properties upon modification with PbOx. In summary, the photocatalytic properties of heterostructures of TiO2 with oxide nanoclusters can be tuned by the oxidation state of the modifying metal oxide, with the possibility of a reduced band gap causing visible light activation and a reduction in charge carrier recombination.
Abstract:
Cu(acac)2 is chemisorbed on TiO2 particles [P-25 (anatase/rutile = 4/1 w/w), Degussa] via coordination by surface Ti–OH groups without elimination of the acac ligand. Post-heating of the Cu(acac)2-adsorbed TiO2 at 773 K yields molecular scale copper(II) oxide clusters on the surface (CuO/TiO2). The copper loading amount (Γ / Cu ions nm⁻²) is controlled over a wide range by the Cu(acac)2 concentration and the chemisorption–calcination cycle number. Valence band (VB) X-ray photoelectron and photoluminescence spectroscopy indicated that the VB maximum of TiO2 rises with increasing Γ, while vacant midgap levels are generated. The surface modification gives rise to visible-light activity and a concomitant significant increase in UV-light activity for the degradation of 2-naphthol and p-cresol. Prolonged irradiation leads to decomposition to CO2, which increases in proportion to irradiation time. The photocatalytic activity strongly depends on the loading, Γ, with an optimum value of Γ for the photocatalytic activity. Electrochemical measurements suggest that the surface CuO clusters promote the reduction of adsorbed O2. First principles density functional theory simulations clearly show that, at Γ < 1, unoccupied Cu 3d levels are generated in the midgap region, and at Γ > 1, the VB maximum rises and the unoccupied Cu 3d levels move to the conduction band minimum of TiO2. These results suggest that visible-light excitation of CuO/TiO2 causes bulk-to-surface interfacial electron transfer at low coverage and surface-to-bulk interfacial electron transfer at high coverage. We conclude that the surface CuO clusters enhance the separation of photogenerated charge carriers by the interfacial electron transfer and the subsequent reduction of adsorbed O2, achieving compatible high levels of visible and UV-light activity.