937 results for First principles
First-Principles Study of the Electronic and Magnetic Properties of Defects in Carbon Nanostructures
Abstract:
Understanding the magnetic properties of graphenic nanostructures is instrumental in future spintronics applications. These magnetic properties are known to depend crucially on the presence of defects. Here we review our recent theoretical studies using density functional calculations on two types of defects in carbon nanostructures: substitutional doping with transition metals, and sp$^3$-type defects created by covalent functionalization with organic and inorganic molecules. We focus on such defects because they can be used to create and control magnetism in graphene-based materials. Our main results are summarized as follows: i) Substitutional metal impurities are fully understood using a model based on the hybridization between the $d$ states of the metal atom and the defect levels associated with an unreconstructed D$_{3h}$ carbon vacancy. We identify three different regimes, associated with the occupation of distinct hybridization levels, which determine the magnetic properties obtained with this type of doping; ii) A spin moment of 1.0 $\mu_B$ is always induced by chemical functionalization when a molecule chemisorbs on a graphene layer via a single C-C (or other weakly polar) covalent bond. The magnetic coupling between adsorbates shows a key dependence on the sublattice adsorption site. This effect is similar to that of H adsorption but has a universal character; iii) The spin moment of substitutional metal impurities can be controlled using strain. In particular, we show that although Ni substitutionals are non-magnetic in flat and unstrained graphene, the magnetism of these defects can be activated by applying either uniaxial strain or curvature to the graphene layer. All these results provide key information about the formation and control of defect-induced magnetism in graphene and related materials.
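For context, the 1.0 $\mu_B$ moment per sp$^3$-type adsorbate and its sublattice-dependent coupling are often rationalized by the sublattice imbalance of the graphene $\pi$ system; the following is a hedged summary of that counting argument (Lieb's theorem), not a derivation from this work:

```latex
% Sublattice-counting argument (Lieb's theorem for the half-filled Hubbard model
% on a bipartite lattice); relating it to chemisorption assumes that each single
% covalent bond effectively removes one p_z orbital from sublattice A or B.
\begin{equation*}
  S = \tfrac{1}{2}\,\lvert N_A - N_B \rvert
  \quad\Longrightarrow\quad
  \text{one adsorbate: } S = \tfrac{1}{2}\ (1.0\,\mu_B); \qquad
  AA \to S = 1, \quad AB \to S = 0 .
\end{equation*}
```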
Abstract:
In this paper, we carried out first-principles calculations to investigate the structural and electronic properties of the binary compound gallium antimonide (GaSb). This theoretical study was performed using density functional theory within the plane-wave pseudopotential method. The effects of exchange and correlation (XC) were treated using the local density approximation (LDA); the generalized gradient approximations (GGA) Perdew–Burke–Ernzerhof (PBE), Perdew–Burke–Ernzerhof revised for solids (PBEsol), Perdew–Wang 91 (PW91), revised Perdew–Burke–Ernzerhof (rPBE) and Armiento–Mattsson 2005 (AM05); the meta-generalized gradient approximations (meta-GGA) Tao–Perdew–Staroverov–Scuseria (TPSS) and revised Tao–Perdew–Staroverov–Scuseria (RTPSS); and the modified Becke–Johnson (MBJ) potential. We calculated the densities of states (DOS) and band structures with the different XC potentials and compared them with theoretical and experimental results reported in the literature. It was found that the LDA, PBEsol, AM05 and RTPSS functionals provide the best results for the lattice parameter (a) and bulk modulus (B0), while for the cohesive energy (Ecoh) the AM05, RTPSS and PW91 functionals are closest to the experimentally obtained values. The band gap energies found with MBJ, RTPSS and AM05 are slightly underestimated with respect to the experimentally reported values.
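As an illustration of how lattice parameters and bulk moduli are typically extracted from such total-energy calculations, here is a fit of hypothetical energy-volume data (not the paper's results) to the third-order Birch-Murnaghan equation of state:

```python
import numpy as np
from scipy.optimize import curve_fit

# Third-order Birch-Murnaghan equation of state E(V).
def birch_murnaghan(V, E0, V0, B0, B0p):
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# Hypothetical (volume, total energy) points per GaSb formula unit; in practice
# these come from DFT total energies at a series of scaled lattice constants.
V = np.array([51.0, 54.0, 57.0, 60.0, 63.0, 66.0])          # Angstrom^3
E = np.array([-8.17, -8.26, -8.30, -8.28, -8.21, -8.08])    # eV

p0 = [E.min(), V[np.argmin(E)], 0.4, 4.0]
(E0, V0, B0, B0p), _ = curve_fit(birch_murnaghan, V, E, p0=p0)

a0 = (4.0 * V0) ** (1.0 / 3.0)       # zinc blende: 4 formula units per cubic cell
print(f"a0 = {a0:.3f} Angstrom, B0 = {B0 * 160.2177:.1f} GPa")   # eV/A^3 -> GPa
```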
Abstract:
Using first-principles methods, we studied extrinsic defect doping in the transparent conducting oxides CuMO2 (M=Sc, Y). We chose Be, Mg, Ca, Si, Ge and Sn as extrinsic dopants substituting for the M and Cu atoms. By systematically calculating the impurity formation energies and transition energy levels, we find that Be-Cu is the most prominent extrinsic donor and Ca-M is the most prominent extrinsic acceptor. In addition, we find that Mg substituting for Sc is the most prominent extrinsic acceptor in CuScO2. Our calculated results are expected to serve as a guide for preparing n-type and p-type materials through extrinsic doping of CuMO2 (M=Sc, Y).
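For reference, the dopant screening described above rests on two standard quantities of the supercell formalism; the conventional definitions (generic, not equations quoted from the paper) are:

```latex
% Standard definitions used in first-principles defect screening (generic
% supercell formalism; the symbols are the conventional ones, not the paper's).
\begin{align*}
  E^{f}(D^{q}) &= E_{\mathrm{tot}}(D^{q}) - E_{\mathrm{tot}}(\mathrm{host})
                  - \sum_i n_i \mu_i + q\,\bigl(E_F + E_{\mathrm{VBM}}\bigr),\\
  \varepsilon(q_1/q_2) &= \frac{E^{f}(D^{q_1};\,E_F{=}0) - E^{f}(D^{q_2};\,E_F{=}0)}{q_2 - q_1},
\end{align*}
```

where $n_i$ counts atoms of species $i$ added to ($n_i>0$) or removed from ($n_i<0$) the supercell and $\mu_i$ is its chemical potential; a dopant is a good donor (acceptor) when its formation energy is low and its transition level lies close to the conduction (valence) band edge.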
Abstract:
The origin of ferromagnetism in d$^0$ semiconductors is studied using first-principles methods with ZnO as a prototype material. We show that the presence of spontaneous magnetization in nitrides and oxides with sufficient holes is an intrinsic property of these first-row d$^0$ semiconductors and can be attributed to the localized nature of the 2p states of O and N. We find that acceptor doping, especially doping at the anion site, can enhance the ferromagnetism with much smaller threshold hole concentrations. The quantum confinement effect also reduces the critical hole concentration to induce ferromagnetism in ZnO nanowires. The characteristic nonmonotonic spin couplings in these systems are explained in terms of the band coupling model.
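For context, the nonmonotonic spin couplings referred to above are conventionally quantified by comparing total energies of parallel and antiparallel spin configurations of two defects in a supercell; a generic convention (not a formula quoted from this abstract):

```latex
% Conventional total-energy criterion for the coupling between two local moments
% in a supercell (generic; mapping to an exchange constant depends on the spin model).
\begin{equation*}
  \Delta E = E_{\mathrm{AFM}} - E_{\mathrm{FM}}, \qquad
  \Delta E > 0 \;\Rightarrow\; \text{ferromagnetic alignment favoured}.
\end{equation*}
```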
Abstract:
Longitudinal zone-boundary X-point phonon frequencies have been calculated by a first-principles pseudopotential method for the III-V zincblende semiconductors AlP, AlAs, AlSb, GaP, GaAs, GaSb, InP, InAs and InSb. The phonon frequencies have been evaluated from total-energy calculations in the frozen-phonon approximation. The calculated phonon frequencies agree very well with the experimental values.
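A sketch of the energy-versus-displacement fit at the heart of the frozen-phonon approximation (hypothetical E(u) data and a generic effective mass; an actual zone-boundary calculation requires a supercell commensurate with the X-point displacement pattern):

```python
import numpy as np

# Frozen-phonon estimate of a mode frequency: fit total energies E(u) computed
# for small frozen displacements u to the harmonic form E0 + k*u^2/2, then
# omega = sqrt(k/mu), with mu the effective mass appropriate to the mode.
# Displacements, energies and the mass below are hypothetical placeholders.
u = np.array([-0.04, -0.02, 0.0, 0.02, 0.04])        # Angstrom
E = np.array([0.0128, 0.0032, 0.0, 0.0032, 0.0128])  # eV (relative to equilibrium)

k = 2.0 * np.polyfit(u, E, 2)[0]                  # eV/Angstrom^2 (quadratic coeff.)
mu = 40.0 * 1.66053907e-27                        # effective mass: 40 amu in kg
k_SI = k * 1.602176634e-19 / 1.0e-20              # eV/Angstrom^2 -> J/m^2
nu = np.sqrt(k_SI / mu) / (2.0 * np.pi)           # frequency in Hz
print(f"nu = {nu / 1e12:.2f} THz  (~{nu / 2.99792458e10:.0f} cm^-1)")
```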
Abstract:
Parkinson’s disease (PD) is an increasingly common neurological disorder in an aging society. The motor and non-motor symptoms of PD advance with disease progression and occur with varying frequency and duration. To ascertain the full extent of a patient’s condition, repeated assessments are necessary to adjust the medical prescription. In clinical studies, symptoms are assessed using the unified Parkinson’s disease rating scale (UPDRS). On one hand, the subjective rating using UPDRS relies on clinical expertise. On the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that the observation in hospital may not accurately represent a patient’s situation at home. For such reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic. The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal and image processing techniques, based on first-principle and data-driven models, for extracting clinically useful information from audio recordings of speech (texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between PD symptom severities estimated using these novel computer methods and clinical ratings based on UPDRS part-III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch-screen mobile device. A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that are able to represent PD deficits in each individual component of the speech system, (2) mapping between clinical ratings and feature estimates of speech symptom severity, and (3) classifying between UPDRS part-III severity levels using speech features and statistical machine learning tools. A novel speech processing method called cepstral separation difference showed a stronger ability to classify between speech symptom severities than existing features of PD speech. In the case of finger tapping, recorded videos of the rapid finger-tapping examination were processed using a novel computer-vision (CV) algorithm that extracts symptom information from video-based tapping signals through motion analysis of the index finger, incorporating a face-detection module for signal calibration. This algorithm was able to discriminate between UPDRS part-III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features constructed using a standard human model to discriminate between a healthy gait and a Parkinsonian gait. The findings of this study suggest that symptom severity levels in PD can be discriminated with high accuracy by combining first-principle (feature) and data-driven (classification) approaches. On one hand, the processing of audio and video recordings allows clinical staff to monitor speech, gait and finger-tapping examinations remotely.
On the other hand, the first-principles approach makes the symptom estimates easier for clinicians to interpret. We have demonstrated that the selected features of speech, gait and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates. The findings support the suitability of these methods as decision support tools in the context of PD assessment.
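A heavily simplified sketch of the combination described above, first-principle features feeding a data-driven classifier, for the finger-tapping case (synthetic signals, toy features and labels; this is not the thesis's computer-vision pipeline or its cepstral separation difference method):

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Toy "first-principle" features from a finger-tapping displacement trace
# (tap rate, mean tap amplitude, amplitude trend) feeding a data-driven
# classifier. Signals, features and severity labels are synthetic placeholders.
def tapping_features(signal, fs):
    peaks, props = find_peaks(signal, height=0.1 * np.max(signal))
    amps = props["peak_heights"]
    rate = len(peaks) * fs / len(signal)                         # taps per second
    trend = np.polyfit(np.arange(len(amps)), amps, 1)[0] if len(amps) > 1 else 0.0
    return [rate, float(np.mean(amps)), trend]

rng = np.random.default_rng(0)
fs = 30.0                                                        # video frame rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)

def synth(rate, decay):                                          # synthetic tapping trace
    return np.abs(np.sin(np.pi * rate * t)) * np.exp(-decay * t) \
        + 0.02 * rng.standard_normal(t.size)

rates, decays = rng.uniform(1.0, 5.0, 60), rng.uniform(0.0, 0.3, 60)
X = np.array([tapping_features(synth(r, d), fs) for r, d in zip(rates, decays)])
y = (rates < 3.0).astype(int)                                    # toy two-level "severity"
clf = make_pipeline(StandardScaler(), SVC())
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```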
Abstract:
We have performed a series of first-principles electronic structure calculations to examine the reaction pathways and the corresponding free energy barriers for the ester hydrolysis of protonated cocaine in its chair and boat conformations. The calculated free energy barriers for the benzoyl ester hydrolysis of protonated chair cocaine are close to the corresponding barriers calculated for the benzoyl ester hydrolysis of neutral cocaine. However, the free energy barrier calculated for the methyl ester hydrolysis of protonated cocaine in its chair conformation is significantly lower than for the methyl ester hydrolysis of neutral cocaine and for the dominant pathway of the benzoyl ester hydrolysis of protonated cocaine. The significant decrease of the free energy barrier, ∼4 kcal/mol, is attributed to the intramolecular acid catalysis of the methyl ester hydrolysis of protonated cocaine, because the transition state structure is stabilized by the strong hydrogen bond between the carbonyl oxygen of the methyl ester moiety and the protonated tropane N. The relative magnitudes of the free energy barriers calculated for different pathways of the ester hydrolysis of protonated chair cocaine are consistent with the experimental kinetic data for cocaine hydrolysis under physiologic conditions. Similar intramolecular acid catalysis also occurs for the benzoyl ester hydrolysis of (protonated) boat cocaine under physiologic conditions, although the contribution of the intramolecular hydrogen bonding to transition state stabilization is negligible. Nonetheless, the predictability of the intramolecular hydrogen bonding could be useful in generating antibody-based catalysts that recruit cocaine to the boat conformation: an analog eliciting antibodies that approximate the protonated tropane N and the benzoyl O more closely than the natural boat conformer might increase the contribution from hydrogen bonding. Such a stable analog of the transition state for intramolecular catalysis of cocaine benzoyl-ester hydrolysis was synthesized and successfully used to elicit a number of anticocaine catalytic antibodies.
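To put the ∼4 kcal/mol barrier lowering in perspective, a back-of-the-envelope transition-state-theory estimate (not a number reported in the paper) of the implied rate enhancement at physiologic temperature:

```latex
% Transition-state-theory estimate of the rate enhancement implied by lowering
% the free energy barrier by ~4 kcal/mol at physiologic temperature
% (T = 310 K, RT ~ 0.62 kcal/mol).
\begin{equation*}
  \frac{k_{\mathrm{protonated}}}{k_{\mathrm{neutral}}}
  = \exp\!\left(\frac{\Delta\Delta G^{\ddagger}}{RT}\right)
  \approx \exp\!\left(\frac{4}{0.616}\right)
  \approx 7\times10^{2}.
\end{equation*}
```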
Abstract:
We have studied experimentally the jump-to-contact (JC) and jump-out-of-contact (JOC) phenomena in gold electrodes. JC can be observed at first contact when two metals approach each other, while JOC occurs at the last contact before breaking. When the indentation depth between the electrodes is limited to a certain value of conductance, a highly reproducible behaviour in the evolution of the conductance can be obtained for hundreds of cycles of formation and rupture. Molecular dynamics simulations of this process show how the two metallic electrodes are shaped into tips with a well-defined crystallographic structure formed through a mechanical annealing mechanism. We report a detailed analysis of the atomic configurations obtained before contact and rupture of these stable structures and compute their conductance using first-principles quantum transport calculations. These results help us understand the conductance values obtained experimentally in the JC and JOC phenomena and improve our understanding of atomic-sized contacts and the evolution of their structural characteristics.
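For reference, first-principles quantum transport results for atomic-sized contacts such as these are conventionally expressed through the Landauer formula (generic convention, not a formula quoted from the abstract):

```latex
% Landauer conductance in terms of transmission eigenchannels at the Fermi
% energy; G0 is the conductance quantum.
\begin{equation*}
  G = G_0 \sum_n T_n(E_F), \qquad
  G_0 = \frac{2e^{2}}{h} \approx 77.5\ \mu\mathrm{S} \approx (12.9\ \mathrm{k\Omega})^{-1}.
\end{equation*}
```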
Abstract:
We present a first-principles density-functional calculation of the Raman spectra of a neutral BEDT-TTF molecule. Our results are in excellent agreement with experimental results. We show that a planar structure is not a stable state of a neutral BEDT-TTF molecule. We consider three possible conformations and discuss their relation to disorder in these systems.
Abstract:
We consider the simplest relevant problem in the foaming of molten plastics, the growth of a single bubble in a sea of highly viscous Newtonian fluid, without interference from other bubbles. This simplest problem has defied accurate solution from first principles. Despite plenty of research on foaming, classical approaches from first principles have neglected the temperature rise in the surrounding fluid, and we find that this oversimplification greatly accelerates the predicted bubble growth. We use a transport phenomena approach to analyze the growth of a solitary bubble, expanding under its own pressure. We consider a bubble of ideal gas growing without the accelerating contribution from mass transfer into the bubble. We explore the roles of viscous forces, fluid inertia, and viscous dissipation. We find that bubble growth depends upon the nucleus radius and nucleus pressure. We begin with a detailed examination of the classical approaches (thermodynamics without viscous heating). Our failure to fit experimental data with these classical approaches sets up the second part of our paper, a novel exploration of the essential decelerating role of viscous heating. We explore both isothermal and adiabatic bubble expansion, and also the decelerating role of surface tension. The adiabatic analysis accounts for the slight deceleration due to the cooling of the expanding gas, which depends on gas polyatomicity. We also explore the pressure profile, and the components of the extra stress tensor, in the fluid surrounding the growing bubble. These stresses can eventually be frozen into foamed plastics. We find that our new theory compares well with measured bubble behavior.
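A minimal numerical sketch of the classical starting point discussed above, i.e. isothermal, inertia-free expansion of an ideal-gas bubble in a very viscous Newtonian liquid with no mass transfer and no viscous heating (parameter values are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical isothermal, inertia-free limit: a single ideal-gas bubble growing in a
# very viscous Newtonian liquid with no mass transfer and no viscous heating.
# Viscous-dominated balance:  p_g - p_inf - 2*sigma/R = 4*eta*(dR/dt)/R,
# with p_g = p_g0*(R0/R)**3.  All parameter values are illustrative only.
eta, sigma = 1.0e4, 0.03                  # Pa.s, N/m (molten-polymer scale)
p_inf, p_g0, R0 = 1.0e5, 4.0e5, 1.0e-6    # ambient pressure, nucleus pressure, nucleus radius

def dRdt(t, R):
    p_g = p_g0 * (R0 / R) ** 3
    return (p_g - p_inf - 2.0 * sigma / R) * R / (4.0 * eta)

sol = solve_ivp(dRdt, (0.0, 5.0), [R0], max_step=0.01)
print(f"R(5 s) = {sol.y[0, -1] * 1e6:.2f} um (started from {R0 * 1e6:.2f} um)")
```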
Abstract:
This paper examines the varying approaches and methodologies adopted when the calculation of holding costs is undertaken, focusing on greenfield development. Whilst acknowledging there may be some consistency in embracing first principles relating to holding cost theory, a review of the literature reveals a considerable lack of uniformity in this regard. There is even less clarity in quantitative determination, especially in Australia where only limited empirical analysis has been undertaken. Despite a growing quantum of research into various elements connected with housing affordability, the matter of holding costs has not been well addressed, notwithstanding its place in the highly prioritised Australian Government housing research agenda. The end result has been a modicum of qualitative commentary relating to holding costs. There have been few attempts at finer-tuned analysis that expose a quantified level of holding cost calculated with underlying rigour. Holding costs can take many forms, but they inevitably involve the computation of “carrying costs” of an initial outlay that has yet to fully realise its ultimate yield. Although sometimes considered a “hidden” cost, it is submitted that holding costs prospectively represent a major determinant of value. If this is the case then, considered in the context of housing affordability, their influence is potentially pervasive.
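A minimal illustration of the "carrying cost of an initial outlay" computation mentioned above, assuming a simple compound opportunity-cost formulation with hypothetical figures (the paper does not prescribe this particular calculation):

```python
# Carrying cost of an initial outlay held for t years at an opportunity cost of
# capital r. Figures are hypothetical; the paper does not prescribe this formula.
outlay = 500_000      # initial land/development outlay ($)
r = 0.07              # annual opportunity cost of capital
t = 3.0               # holding period in years (e.g. planning and approval delay)

holding_cost = outlay * ((1 + r) ** t - 1)
print(f"Holding cost over {t:.0f} years: ${holding_cost:,.0f}")
```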
Abstract:
This paper discusses the development of a dynamic model for a torpedo-shaped submarine. Expressions for hydrostatic, added mass, hydrodynamic, control surface and propeller forces and moments are derived from first principles. Experimental data obtained from flume tests of the submarine are inserted into the model in order to provide computer simulations of the open-loop behavior of the system.
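For orientation, dynamic models of this kind are commonly assembled into the standard vectorial form for an underwater vehicle (generic notation; the paper's own grouping of terms may differ):

```latex
% Standard 6-DOF vectorial form for an underwater vehicle: inertia (rigid body
% plus added mass) M, Coriolis/centripetal terms C, hydrodynamic damping D,
% hydrostatic restoring forces g, and control-surface/propeller inputs tau.
\begin{equation*}
  M\dot{\nu} + C(\nu)\,\nu + D(\nu)\,\nu + g(\eta) = \tau,
\end{equation*}
```

with $\nu$ the body-fixed velocities and $\eta$ the position and attitude.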
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem. This problem involves locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to result in improved reliability of matching in the presence of radiometric distortion, which is significant since radiometric distortion commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint. The theoretical derivation of this constraint is in contrast to the existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches, and improve match accuracy.
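A compact sketch of the rank transform and a winner-take-all disparity search over rank-transformed images (toy images and window sizes; the thesis's rank constraint and hybrid algorithm are not reproduced here):

```python
import numpy as np

# Rank transform followed by a winner-take-all SAD disparity search on the
# transformed images. Window sizes, disparity range and the toy image pair are
# illustrative; the thesis's rank constraint is not implemented here.
def rank_transform(img, w=5):
    h = w // 2
    out = np.zeros(img.shape, dtype=np.int32)
    for y in range(h, img.shape[0] - h):
        for x in range(h, img.shape[1] - h):
            patch = img[y - h:y + h + 1, x - h:x + h + 1]
            out[y, x] = np.count_nonzero(patch < img[y, x])   # rank of centre pixel
    return out

def disparity_map(left, right, max_d=16, w=7):
    h = w // 2
    L, R = rank_transform(left), rank_transform(right)
    disp = np.zeros(left.shape, dtype=np.int32)
    for y in range(h, L.shape[0] - h):
        for x in range(h + max_d, L.shape[1] - h):
            win = L[y - h:y + h + 1, x - h:x + h + 1]
            costs = [np.abs(win - R[y - h:y + h + 1, x - d - h:x - d + h + 1]).sum()
                     for d in range(max_d)]
            disp[y, x] = int(np.argmin(costs))                # winner-take-all match
    return disp

rng = np.random.default_rng(1)
right = rng.integers(0, 256, (60, 80)).astype(np.float32)
left = np.roll(right, 5, axis=1)          # toy pair: uniform 5-pixel disparity
print(np.median(disparity_map(left, right)[10:-10, 30:-10]))  # expect ~5
```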
Abstract:
This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have: (1) demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements; (2) demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique; (3) demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral; and (4) provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system. The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
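As a pointer to the two-component framework referred to above, dual-energy decomposition reduces to a 2x2 linear system for the areal densities of bone mineral and soft tissue; a minimal sketch with hypothetical attenuation coefficients and measurements:

```python
import numpy as np

# Two-component dual-energy decomposition underlying DEXA: the log-attenuation
# measured at each energy is a linear combination of the areal densities of bone
# mineral and soft tissue. Attenuation coefficients and measurements are
# hypothetical placeholders, not values used in the thesis.
mu = np.array([[0.573, 0.264],      # (mu/rho) at the low energy:  [bone, soft] cm^2/g
               [0.302, 0.205]])     # (mu/rho) at the high energy: [bone, soft] cm^2/g
log_atten = np.array([1.10, 0.65])  # measured ln(I0/I) at the two energies

sigma_bone, sigma_soft = np.linalg.solve(mu, log_atten)   # areal densities, g/cm^2
print(f"bone mineral: {sigma_bone:.2f} g/cm^2, soft tissue: {sigma_soft:.2f} g/cm^2")
```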