969 results for First-principles calculations
Abstract:
Thesis (D.M.A.)--University of Washington, 2016-05
Abstract:
Density functional theory calculations were used to investigate the mechanisms of the NO-carbon and N2O-carbon reactions. This was the first time the importance of surface nitrogen groups was addressed in the kinetic behavior of the NO-carbon reaction. It was found that off-plane nitrogen groups adjacent to zigzag edge sites, and in-plane nitrogen groups located on armchair sites, lower the bond energy of oxygen desorption by ca. 20% relative to the off-plane epoxy group adjacent to zigzag edge sites and the in-plane o-quinone oxygen atoms on armchair sites; this may explain why the experimentally obtained activation energy of the NO-carbon reaction is ca. 20% lower than that of the O2-carbon reaction above 923 K. A higher ratio of oxygen atoms can be formed in the N2O-carbon reaction, because of the lower dissociation energy of N2O, which results in a higher ratio of off-plane epoxy oxygen atoms. The desorption energy of semiquinone with two adjacent off-plane oxygen groups is ca. 20% less than that of semiquinone with only one adjacent off-plane oxygen group, which may explain why the activation energy of the N2O-carbon reaction is also ca. 20% less than that of the O2-carbon reaction. The new mechanism also provides a good qualitative comparison of the relative rates of the NO-, N2O-, and O2-carbon reactions, and the anisotropic character of these gas-carbon reactions is likewise well explained.
Abstract:
In this relatively short book, David Clark sets out to fill what he perceives to be a gap in the presently available writing on Australian public law by achieving two distinct objectives. The first is to remedy 'one of the oddest limitations of current public law writing in Australia' by detailing the history and operation of the state and territory constitutions as well as their philosophical underpinnings. The other is to explore certain areas of federal public law, such as the laws applicable to the constitution and operation of the Commonwealth Parliament and non-judicial bodies such as the Ombudsman, which are often not dealt with in leading constitutional and administrative law texts. It is acknowledged by the author that attempting to cover such a wide range of topics is a 'high-wire act'. Fortunately, apart from one slight stumble, Clark manages to keep his balance and has produced a useful précis of a number of the institutions and concepts that are fundamental to the orderly functioning of Australian society.
Abstract:
We explore the calculation of unimolecular bound states and resonances for deep-well species at large angular momentum using a Chebyshev filter diagonalization scheme incorporating doubling of the autocorrelation function, as presented recently by Neumaier and Mandelshtam [Phys. Rev. Lett. 86, 5031 (2001)]. The method has been employed to compute the challenging J=20 bound and resonance states of the HO2 system. The methodology was first tested for J=2 against previous calculations, and then extended to J=20 using a parallel computing strategy. The quantum J-specific unimolecular dissociation rates for HO2 -> H + O2 in the energy range from 2.114 to 2.596 eV are reported for the first time, and comparisons are made with the results of Troe and co-workers [J. Chem. Phys. 113, 11019 (2000); Phys. Chem. Chem. Phys. 2, 631 (2000)] from statistical adiabatic channel method/classical trajectory (SACM/CT) calculations. For most energies, the reported SACM/CT rate constants agree well with the average of the fluctuating quantum-mechanical rates. Near the dissociation threshold the quantum rates fluctuate more strongly, but their average is still in agreement with the SACM/CT results.
Abstract:
The parliamentary first speech is a site of discursive privilege that offers each parliamentarian an opportunity to articulate the principles and aspirations that underpin her or his entry into public life. When utilised by parliamentarians of Asian Australian backgrounds, these speeches embody a unique opportunity to comprehend how ethnic identity is performed amidst the numerous, competing interests by which legislators are bound and challenged. The construction and representation of Asian Australian identity in these contexts provide a fascinating opportunity to understand the junctures between ethnicity and Australian citizenship. This essay explores how Asian Australians may be subject to forms of 'coercive mimeticism' in certain social sites, and also how these hegemonic pressures may simultaneously present 'frames of enactment' through their performance.
Abstract:
Ab initio density functional theory (DFT) calculations are performed to study the adsorption of H2 molecules on a Ti-doped Mg(0001) surface. We find that two hydrogen molecules are able to dissociate on top of the Ti atom with very small activation barriers (0.103 and 0.145 eV for the first and second H2 molecules, respectively). Additionally, a molecular adsorption state of H2 above the Ti atom is observed for the first time and is attributed to the polarization of the H2 molecule by the Ti cation. Our results parallel recent findings for H2 adsorption on Ti-doped carbon nanotubes or fullerenes. They provide new insight into the preliminary stages of hydrogen adsorption onto Ti-incorporated Mg surfaces.
Abstract:
An understanding of inheritance requires comprehension of genetic processes at all levels, from molecules to populations. Frequently genetics courses are separated into molecular and organismal genetics and students may fail to see the relationships between them. This is particularly true with human genetics, because of the difficulties in designing experimental approaches which are consistent with ethical restrictions, student abilities and background knowledge, and available time and materials. During 2005 we used analysis of single nucleotide polymorphisms (SNPs) in two genetic regions to enhance student learning and provide a practical experience in human genetics. Students scanned databases to discover SNPs in a gene of interest, used software to design PCR primers and a restriction enzyme based assay for the alleles, and carried out an analysis of the SNP on anonymous individual and family DNAs. The project occupied eight to ten hours per week for one semester, with some time spent in the laboratory and some spent in database searching, reading and writing the report. In completing their projects, students acquired a knowledge of Mendel’s first law (through looking at inheritance patterns), Mendel’s second law and the exceptions (the concepts of linkage and linkage disequilibrium), DNA structure (primer design and restriction enzyme analysis) and function (SNPs in coding and non-coding regions), population genetics and the statistical analysis of allele frequencies, genomics, bioinformatics and the ethical issues associated with the use of human samples. They also developed skills in presentation of results by publication and conference participation. Deficiencies in their understanding (for example of inheritance patterns, gene structure, statistical approaches and report writing) were detected and guidance given during the project. SNP analysis was found to be a powerful approach to enhance and integrate student understanding of genetic concepts.
Abstract:
This work attempts to create a systemic design framework for man-machine interfaces which is self-consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment is in the main philosophical and theoretical, and analyses the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself, because as an indivisible ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, no matter how diverse they are, into one model. This is worsened by the minute provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays a rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations, which reduces the user's degrees of freedom. Fourth, it increases the dependence of the user upon the supplier through inadequate documentation and understanding of the package. Fifth, it tends to degrade the design expertise of data processing practitioners. In view of this understanding, an alternative methodological design framework is proposed that is consistent both with the systems approach and with the role of a package in its likely context. The proposition is based upon an extension of the identified concept of the hierarchy of holons, which facilitates the examination of the complex relationships of a package with its two principal environments.
The first is the user: his characteristics and decision-making practices and procedures, implying an examination of the user's M.I.S. network. The second is the software environment and its influence upon a package regarding its support, control and operation. The framework is built gradually as the discussion advances around the central theme of a compatible M.I.S., software and model design. This leads to the formation of an alternative package architecture based upon the design of a number of independent, self-contained small parts. This is believed to constitute a nucleus around which not only can packages be more effectively designed, but which is also applicable to the design of many man-machine systems.
Abstract:
East-West trade has grown rapidly since the sixties, stimulating a parallel expansion in the literature on the subject. An extensive review of this literature shows that: (i) most of the issues involved have at their source the distinctions between East and West in political ideology and/or economic management; and (ii) there has been a tendency to keep theoretical and practical perspectives on the subject too separate. This thesis demonstrates the importance of understanding the fundamental principles implied in the first point, and represents an attempt to bridge the gap identified in the second. A detailed study of the market for fire fighting equipment in Eastern Europe is undertaken in collaboration with a medium-sized company, Angus Fire Armour Limited. Desk research methods are combined with visits to the market to assess the potential for the company's products, and recommendations for future strategy are made. The case demonstrates the scope and limitations of various research methods for the East European market, and a model for market research relevant to all companies is developed. The case study highlights three areas largely neglected in the literature: (i) the problems of internal company adaptation to East European conditions; (ii) the division of responsibility between foreign trade organisations; and (iii) bribery and corruption in East-West trade. Further research into the second topic - through a survey of 36 UK exporters - and the third - through analysis of publicised corruption cases - confirms the representativeness of the Angus experience, and reflects the complexity of the East European import process, which does not always function as is commonly supposed. The very complexity of the problems confronting companies reaffirms the need to appreciate the principles underlying the subject, while the detailed analysis of questions of, originally, a marketing nature reveals wider implications for East-West trade and East-West relations.
Abstract:
Time after time… and aspect and mood. Over the last twenty-five years, the study of the acquisition of time, aspect and, to a lesser extent, mood has enjoyed increasing popularity and a constant widening of its scope. In such a teeming field, what can this book contribute? We believe that it is unique in several respects. First, this volume encompasses studies from different theoretical frameworks: functionalism vs generativism, or function-based vs form-based approaches. It also brings together various sub-fields (first and second language acquisition, child and adult acquisition, bilingualism) that tend to evolve in parallel rather than learn from each other. A further originality is that it covers a wide range of typologically different languages, and features less studied languages such as Korean and Bulgarian. Finally, the book gathers well-established scholars, young researchers, and even research students in a rich inter-generational exchange that ensures not only the survival but also the renewal and refreshment of the discipline. The book at a glance: the first part of the volume is devoted to child language acquisition in monolingual, impaired and bilingual settings, while the second part focuses on adult learners. In this section, we provide an overview of each chapter. The first study, by Aviya Hacohen, explores the acquisition of compositional telicity in Hebrew L1. Her psycholinguistic approach contributes valuable data to refine theoretical accounts. Through an innovative methodology, she gathers information from adults and children on the influence of definiteness, number, and the mass vs count distinction on the constitution of a telic interpretation of the verb phrase. She notices that the notion of definiteness is mastered by children as young as 10, while the mass/count distinction does not appear before 10;7. However, this does not entail an adult-like use of telicity.
She therefore concludes that, beyond definiteness and noun type, pragmatics may play an important role in the derivation of Hebrew compositional telicity. For the second chapter we move from a Semitic language to a Slavic one. Milena Kuehnast focuses on the acquisition of negative imperatives in Bulgarian, a form that is grammatical only with the imperfective form of the verb. The study examines how 40 Bulgarian children in two age groups (15 aged 2;11-3;11 and 25 aged 4;00-5;00) develop with respect to the acquisition of imperfective viewpoints and the use of imperfective morphology. It shows an evolution in the expression of force in the use of negative imperatives, as well as the influence of morphological complexity on the successful production of forms. With Yi-An Lin's study, we turn to another type of informant and another framework: he studies the production of children suffering from Specific Language Impairment (SLI), a developmental language disorder whose causes exclude cognitive impairment, psycho-emotional disturbance, and motor-articulatory disorders. Using the Leonard corpus in CLAN, Lin tests two competing accounts of SLI (the Agreement and Tense Omission Model [ATOM] and his own Phonetic Form Deficit Model [PFDM]) that conflict on the role attributed to spellout in the impairment. Spellout is the point at which the Computational System for Human Language (CHL) passes the most recently derived part of the derivation over to the interface components, Phonetic Form (PF) and Logical Form (LF). ATOM claims that SLI sufferers have a deficit in their syntactic representation, while PFDM suggests that the problem occurs only at the spellout level. After studying the corpus from the point of view of tense/agreement marking, case marking, argument movement and auxiliary inversion, Lin finds further support for his model.
Olga Gupol, Susan Rohstein and Sharon Armon-Lotem's chapter offers a welcome bridge between child language acquisition and multilingualism. Their study explores the influence of intensive exposure to L2 Hebrew on the development of L1 Russian tense and aspect morphology through an elicited narrative. Their informants are 40 Russian-Hebrew sequential bilingual children in two age groups, 4;0-4;11 and 7;0-8;0. They come to the conclusion that bilingual children anchor their narratives in the perfective, like monolinguals. However, while aware of grammatical aspect, bilinguals lack the full form-function mapping and tend to overgeneralize the imperfective on the grounds of simplicity (imperfectives are the least morphologically marked forms), universality (the imperfective covers more functions) and interference. Rafael Salaberry opens the second section, on foreign language learners. In his contribution, he reflects on the difficulty L2 learners of Spanish encounter in distinguishing between iterativity (conveyed with the preterite) and habituality (expressed through the imperfect). He examines in turn the theoretical views that see, on the one hand, habituality as part of grammatical knowledge and iterativity as pragmatic knowledge, and, on the other hand, both habituality and iterativity as grammatical knowledge. He comes to the conclusion that the use of the preterite as a default past tense marker may explain the impoverished system of aspectual distinctions, not only at beginner but also at advanced levels, which may indicate that the system is represented differently among L1 and L2 speakers. Acquiring the vast array of functions conveyed by a form is therefore no mean feat, as confirmed by the next study. Based on prototype theory, Kathleen Bardovi-Harlig's chapter focuses on the development of the progressive in L2 English. It opens with an overview of the functions of the progressive in English.
Then, a review of acquisition research on the progressive in English and other languages is provided. The bulk of the chapter reports on a longitudinal study of 16 learners of L2 English and shows how their use of the progressive expands from the prototypical uses of process and continuousness to the less prototypical uses of repetition and future. The study concludes that the progressive spreads in interlanguage in accordance with prototype accounts. However, it suggests additional stages, not predicted by the Aspect Hypothesis, in the development from activities and accomplishments, at least for the meaning of repeatedness. A similar theoretical framework is adopted in the following chapter, but it deals with a lesser studied language. Hyun-Jin Kim revisits the claims of the Aspect Hypothesis in relation to the acquisition of L2 Korean by two L1 English learners. Inspired by studies on L2 Japanese, she focuses on the emergence and spread of the past/perfective marker -ess- and the progressive -ko iss- in the interlanguage of her informants throughout their third and fourth semesters of study. The data, collected through six sessions of conversational interviews and picture description tasks, seem to support the Aspect Hypothesis: learners show a strong association between past tense and accomplishments/achievements at the start and a gradual extension to other types; a limited use of the past/perfective marker with states; and an affinity of the progressive with activities/accomplishments and, later, achievements. In addition, -ko iss- moves from progressive to resultative in the specific category of Korean verbs meaning wear/carry. While the previous contributions focus on function, Evgeniya Sergeeva and Jean-Pierre Chevrot's contribution is interested in form. The authors explore the acquisition of verbal morphology in L2 French by 30 instructed native speakers of Russian at low and high proficiency levels.
They use an elicitation task for verbs with different models of stem alternation and study how token frequency and base forms influence stem selection. The analysis shows that frequency affects correct production, especially among learners with high proficiency. As for substitution errors, it appears that forms with a simple structure are systematically more frequent than the target forms they replace. When a complex form serves as a substitute, it is more frequent only when it replaces another complex form. As regards base forms, the 3rd person singular of the present, and to some extent the infinitive, play this role in the corpus. The authors therefore conclude that the processing of surface forms can be influenced positively or negatively by the frequency of the target forms and of other competing stems, and by the proximity of the target stem to a base form. Finally, Martin Howard's contribution takes up the challenge of focusing on the poorer relation of the TAM system. On the basis of L2 French data obtained through sociolinguistic interviews, he studies the expression of futurity, the conditional and the subjunctive in three groups of university learners with classroom teaching only (two or three years of university teaching) or with a mixture of classroom teaching and naturalistic exposure (two years at university plus one year abroad). An analysis of relative frequencies leads him to suggest a continuum of use, going from the futurate present to the conditional with past hypothetical conditional clauses in si, which needs to be confirmed by further studies. Acknowledgements: the present volume was inspired by the conference Acquisition of Tense - Aspect - Mood in First and Second Language held on 9 and 10 February 2008 at Aston University (Birmingham, UK), where over 40 delegates from four continents and over a dozen countries met for lively and enjoyable discussions.
This collection of papers was double peer-reviewed by an international scientific committee made up of Kathleen Bardovi-Harlig (Indiana University), Christine Bozier (Lund Universitet), Alex Housen (Vrije Universiteit Brussel), Martin Howard (University College Cork), Florence Myles (Newcastle University), Urszula Paprocka (Catholic University of Lublin), †Clive Perdue (Université Paris 8), Michel Pierrard (Vrije Universiteit Brussel), Rafael Salaberry (University of Texas at Austin), Suzanne Schlyter (Lund Universitet), Richard Towell (Salford University), and Daniel Véronique (Université d'Aix-en-Provence). We are very much indebted to that scientific committee for their insightful input at each step of the project. We are also thankful for the financial support of the Association for French Language Studies through its workshop grant, and to the Aston Modern Languages Research Foundation for funding the proofreading of the manuscript.
Abstract:
For the first time, stochastic calculations for a model of a real-world forward-pumped fibre Raman amplifier with randomly varying birefringence have been carried out numerically, based on the Kloeden-Platen-Schurz algorithm. The results obtained for the averaged gain and the gain fluctuations as a function of the polarization mode dispersion (PMD) parameter agree quantitatively with the results of a previously developed analytical model. At the same time, the direct numerical simulations demonstrate increased stochastisation (a maximum in the averaged gain variation) within the PMD-parameter region of 0.1-0.3 ps/km^1/2. The results give an insight into the margins of applicability of the generic multi-scale technique widely used to derive the coupled Manakov equations, and allow the analytical model to be generalized to account for pump depletion, group-delay dispersion and Kerr nonlinearity, which is of great interest for the development of high-transmission-rate optical networks.
Abstract:
In developed societies, healthcare service systems face a double challenge: society expects the level of service to rise and the number of errors to fall, while overloaded budgets make cost cutting absolutely necessary. This challenge is comparable in magnitude to the one the US automotive industry faced from the 1970s onward. In the automotive industry, the solution was the comprehension and application of the principles and tools of lean management. This study asks whether that solution can also be applied to healthcare. The article first introduces the problems of the healthcare system, then describes the emergence of the lean management concept and how it became widely known. The second half of the study summarizes the experience reported in the literature on the topic and draws conclusions.
Abstract:
Lutein is a principal constituent of the human macular pigment. This study comprises two projects: the first examines the conformational geometries of lutein and its potential adaptability in biological systems; the second is a study of the response of human subjects to lutein supplements. Using the semi-empirical parametric method 3 (PM3) and density functional theory with the B3LYP/6-31G* basis set, the relative energies of the s-cis conformers of lutein were determined. All 512 s-cis conformers were calculated with PM3, and a smaller, representative group was also studied using density functional theory. The PM3 results were correlated systematically with the B3LYP values, which enables the results to be calibrated. The relative energies of the conformers range from 1 to 30 kcal/mol, and many are dynamically accessible at normal temperatures. Four commercial formulations containing lutein were studied. The serum and macular pigment (MP) responses of human subjects to these lutein supplements, at doses of 9 or 20 mg/day, were measured relative to a placebo over a six-month period. In each instance, lutein levels in serum increased and correlated with MP increases. The results demonstrate that responses are significantly dependent upon formulation and that components other than lutein have an important influence on serum response.
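As an illustrative aside (not taken from the study itself), the claim that conformers spanning 1-30 kcal/mol are "dynamically accessible at normal temperatures" follows from the Boltzmann factor exp(-dE/RT); the energy values below are chosen only to bracket the reported range:

```python
import math

# Boltzmann factor for conformer relative energies (illustrative values only;
# the study reports a 1-30 kcal/mol range, not these specific conformers).
R = 1.987e-3          # gas constant, kcal/(mol*K)
T = 298.15            # room temperature, K
for dE in (1.0, 5.0, 30.0):   # relative energy, kcal/mol
    p = math.exp(-dE / (R * T))
    print(f"dE = {dE:5.1f} kcal/mol -> relative population {p:.2e}")
```

A conformer 1 kcal/mol above the minimum retains a relative population of roughly 0.19, while one 30 kcal/mol higher is effectively unpopulated, which is why only the low-lying part of the 1-30 kcal/mol range is thermally accessible.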
Abstract:
Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation.
Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope.
In this work, we propose two methods for improving the efficiency of free energy calculations.
First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths.
We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost.
Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers.
We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths, while retaining beneficial characteristics of CFT.
Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path.
Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
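For context, the kind of estimator this work improves upon can be illustrated with the classic exponential-averaging (Zwanzig) relation, dF = -kT ln<exp(-W/kT)>; the sketch below uses synthetic Gaussian work samples (all parameter values are assumptions for illustration) and is not the pCrooks estimator developed in the thesis:

```python
import numpy as np

# Toy free energy estimate via exponential averaging (Zwanzig relation).
# Generic illustration only; not the pCrooks estimator from the thesis.
rng = np.random.default_rng(0)
kT = 1.0                                # reduced units (assumed)
mu, sigma = 2.0, 1.0                    # work-distribution parameters (assumed)
work = rng.normal(mu, sigma, 200_000)   # synthetic forward "work" samples

# Zwanzig relation: dF = -kT * ln < exp(-W / kT) >
dF_est = -kT * np.log(np.mean(np.exp(-work / kT)))

# For Gaussian work the analytic answer is mu - sigma^2 / (2 kT),
# so the estimate can be checked directly.
dF_exact = mu - sigma**2 / (2 * kT)
print(f"estimated dF = {dF_est:.3f}, exact dF = {dF_exact:.3f}")
```

The variance of this simple estimator grows rapidly with the width of the work distribution, which is precisely the problem that lower-variance paths and estimators such as CFT variants address.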