424 results for "Coulomb blockade effect"
Abstract:
A tunnel face may collapse if the applied support pressure is lower than a limit value called the 'critical' or 'collapse' pressure. In this work, an advanced rotational failure mechanism generated 'point-by-point' is developed to compute the collapse pressure of tunnel faces in layered (or stratified) grounds or in materials that follow a non-linear failure criterion. The proposed solution is an upper-bound solution in the framework of limit analysis, and it extends the most advanced face failure mechanism in the literature. The excavation of a tunnel in a layered ground, or in a material with a non-linear failure criterion, implies a spatial variability of the strength properties.
Because of this, the rotational mechanism recently proposed by Mollon et al. (2011b) for Mohr-Coulomb soils is generalized so that it can consider local values of the friction angle and of the cohesion. For layered soils, the mechanism is further extended to consider the possibility of partial collapse; the proposed methodology is the first solution with a partial collapse mechanism that can fit the stratification. Similarly, the use of a non-linear failure criterion requires introducing new parameters into the optimization problem to account for the distribution of normal stresses along the failure surface. A 3D numerical model is employed to validate the predictions of the limit analysis mechanism, demonstrating that it provides, with a significantly reduced computational effort, good predictions of the critical pressure, of the type of collapse (global or partial) in layered soils, and of the failure geometry. The mechanism is then employed to conduct parametric studies of the influence of several geometrical and mechanical parameters on the face stability of tunnels in layered soils. The methodology is also employed to develop simple design charts that provide the face collapse pressure of tunnels driven by TBM in low-quality rock masses, and to study the influence of the construction method. In addition, a reliability analysis of the stability of a tunnel face driven in a highly fractured rock mass is performed. The objective is to analyze how different assumptions about distribution types and correlation structures affect the reliability results; the sensitivity of the reliability index to changes in the random variables is also studied, identifying the most relevant variables for engineering design. Finally, an experimental study is carried out using a small-scale laboratory model.
The problem is modeled in half, cut vertically through the tunnel axis, so that the displacements of soil particles can be recorded by a digital image correlation technique. The tests were performed with dry sand, and displacements were controlled by a piston that simulates the tunnel face. The results of the model tests are compared with the predictions of the limit analysis mechanism, and a reasonable agreement, consistent with the literature, is obtained both between the shapes of the failure surfaces and between the collapse pressures observed in the tests and those computed with the analytical solution.
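The generalization above hinges on evaluating the strength parameters locally, layer by layer, rather than using a single global pair (c, φ). A minimal sketch of that idea follows; the layer boundaries and strength values are hypothetical illustrations, not the thesis' data or implementation.

```python
import math

# Layer-wise Mohr-Coulomb parameters: cohesion c (kPa), friction angle phi (deg).
# Values and depths are hypothetical, for illustration only.
LAYERS = [
    {"top": 0.0, "bottom": 4.0, "c": 5.0, "phi": 30.0},   # upper sand stratum
    {"top": 4.0, "bottom": 9.0, "c": 20.0, "phi": 22.0},  # lower clay stratum
]

def local_strength(sigma_n, depth):
    """Local shear strength tau = c(z) + sigma_n * tan(phi(z)),
    using the parameters of the layer that contains `depth`."""
    for layer in LAYERS:
        if layer["top"] <= depth < layer["bottom"]:
            return layer["c"] + sigma_n * math.tan(math.radians(layer["phi"]))
    raise ValueError("depth outside the modeled strata")

# The same normal stress mobilizes a different strength in each stratum:
print(local_strength(100.0, 2.0))  # sand: 5 + 100*tan(30 deg)
print(local_strength(100.0, 6.0))  # clay: 20 + 100*tan(22 deg)
```

In the actual mechanism these local values enter the work-rate balance point by point along the generated failure surface.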
Abstract:
There are about 1,300 large dams in Spain, 20% of which were built before 1960. The fact that many old dams are still in operation has produced a growing interest in reevaluating their safety using new or updated tools that incorporate state-of-the-art failure modes, more complete geotechnical concepts, and new safety assessment techniques. For instance, one common design approach for gravity dams considers sliding through the dam-foundation interface, using a simple linear Mohr-Coulomb failure criterion with constant friction angle and cohesion parameters. However, the influence of aspects such as the persistence of joint sets in the rock mass below the dam foundation, of other failure criteria proposed for rock joints and rock masses (e.g., the Hoek-Brown criterion), or of the volumetric strains that occur during plastic failure of rock masses (i.e., the influence of dilatancy) is often not considered during the original dam design. In this context, an analytical methodology is proposed herein to assess the sliding stability of concrete dams, considering an extended failure mechanism in the rock foundation characterized by the presence of an inclined, impersistent joint set.
In particular, the possibility of a preexisting, sub-horizontal, impersistent joint set is considered, with a potential failure surface that could extend through the rock mass. The safety factor is therefore computed using a combination of the strength along the rock joint (using the non-linear Barton and Choubey (1977) and Barton and Bandis (1990) failure criteria) and along the rock mass (using the non-linear Hoek and Brown (1980) failure criterion in its generalized form from Hoek et al. (2002)). The proposed methodology also considers the influence of a non-associative flow rule, incorporated through a (constant) dilation angle (Hoek and Brown 1997). The newly proposed analytical methodology is used to assess the dam stability conditions employing two models, one deterministic and one probabilistic, which yield the sliding safety factor and the probability of failure, respectively. The deterministic model, implemented in MATLAB, is validated against numerical solutions computed with the finite difference code FLAC 6.0. The proposed deterministic model provides results that are very similar to those computed with FLAC; however, since the new formulation can be implemented in a spreadsheet, its computational cost is significantly smaller, making it much easier to conduct parametric analyses of the influence of the different input parameters on the dam's safety. Once the model is validated, parametric analyses are conducted using the main parameters that describe the dam's foundation. From this study, the impact of the most influential parameters on the sliding stability analysis is obtained, and the error introduced by the assumed flow rule is assessed. The probability of failure is obtained employing the First Order Reliability Method (FORM). The probabilistic model is then validated using Monte Carlo simulation.
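The two published strength criteria combined in the methodology can be sketched as follows. The closed-form expressions are the standard Barton-Bandis joint criterion and the generalized Hoek-Brown criterion (Hoek et al. 2002); all numerical parameter values below are illustrative assumptions, not the thesis' case study.

```python
import math

def barton_bandis_tau(sigma_n, phi_r, jrc, jcs):
    """Joint shear strength (MPa):
    tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n))."""
    return sigma_n * math.tan(math.radians(phi_r + jrc * math.log10(jcs / sigma_n)))

def hoek_brown_sigma1(sigma3, sigma_ci, gsi, mi, d=0.0):
    """Generalized Hoek-Brown: major principal stress at failure (MPa),
    sigma1 = sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s)^a."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Joint: residual friction 25 deg, JRC = 8, JCS = 50 MPa, sigma_n = 1 MPa
print(barton_bandis_tau(1.0, 25.0, 8.0, 50.0))
# Rock mass: sigma3 = 1 MPa, sigma_ci = 50 MPa, GSI = 45, mi = 12, undisturbed
print(hoek_brown_sigma1(1.0, 50.0, 45.0, 12.0))
```

In the thesis methodology the first expression governs the portion of the sliding surface along the joint and the second the portion that cuts through the rock mass.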
Results obtained using both methodologies show good agreement for cases in which the rock mass has a non-associated flow rule. For cases with an associated flow rule, errors between 0.7% and 66% are obtained, with the best agreement corresponding to cases at, or close to, limit equilibrium conditions. The main advantage of FORM for sliding stability analyses of gravity dams is its computational efficiency: whereas Monte Carlo simulations require at least 4 hours per run, FORM requires only 1 to 3 minutes.
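The cost gap between FORM and Monte Carlo can be illustrated on a toy limit state (not the dam model): for a linear limit state g = R - S with independent normal variables, FORM is exact and costs essentially one evaluation, while Monte Carlo needs many thousands of samples to estimate the same failure probability.

```python
import math
import random

# Hypothetical resistance R ~ N(10, 2) and load S ~ N(6, 1.5); failure if g = R - S < 0.
mu_r, sd_r = 10.0, 2.0
mu_s, sd_s = 6.0, 1.5

# FORM: for this linear, normal case the reliability index is exact.
beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)      # reliability index
pf_form = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Pf = Phi(-beta)

# Monte Carlo check: fraction of realizations with g < 0.
random.seed(0)
n = 200_000
fails = sum(random.gauss(mu_r, sd_r) - random.gauss(mu_s, sd_s) < 0
            for _ in range(n))
pf_mc = fails / n

print(beta, pf_form, pf_mc)  # the two failure probabilities agree closely
```

For the nonlinear, implicit limit states of the dam problem FORM is no longer exact, which is why the thesis validates it against Monte Carlo.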
Abstract:
On reading several articles in the journal Revista de Obras Públicas (Jiménez Salas, 1945), one recalls such leading figures as Coulomb (1773), Poncelet (1840), Rankine (1856), Culmann (1866), Mohr (1871) and Boussinesq (1876), among many others, who created the basis of scientific knowledge that would make the complicated task of construction progressively easier. However, their advances were approximations which showed considerable discrepancies when faced with the behaviour of nature. There came a time when such discrepancies became all too evident. Substantial settlements during the construction of modern buildings, embankment dam failures and major landslides, during the construction of the Panama Canal for example, led the American Society of Civil Engineers (ASCE) to form a committee to analyse the construction practices of the time. Similar incidents had taken place in Europe, for example in railway cuttings, which in the case of Sweden had resulted in heavy losses of both materials and human lives. In his professional practice, the Austrian-American engineer Karl Terzaghi (1883) had encountered the many challenges posed by nature and the lack of knowledge at his disposal with which to overcome them. Terzaghi first sought a solution in geology, only to discover that it lacked the accuracy necessary for engineering practice. He therefore threw himself into tireless research based on the experimental method. He began in 1917 with limited means, but soon managed to develop several tests which allowed him to establish the first concepts of a new science: Soil Mechanics. This science first saw the light of day in 1925 with the publication of his book Erdbaumechanik auf bodenphysikalischer Grundlage. Other figures were quick to make their own scientific contributions.
Such was the case of Austrian-American engineer, Arthur Casagrande (1902), whose initiative to organize the first International Congress of Soil Mechanics and Foundation Engineering provided the springboard that this science needed. At the same time, other international figures were becoming involved in this period of great advances and innovative concepts. Figures including the likes of Alec Skempton (1914) in the United Kingdom, Ralph Peck (1912) in the United States, and Laurits Bjerrum (1918) in Norway stood out amongst the greatest of their time. This thesis investigates the lives of these geotechnical engineers to whom we are indebted for a great many scientific advances in this new science known as Soil Mechanics. Moreover, each of these eminent figures held the presidency of the International Society of Soil Mechanics and Foundation Engineering, record of which can be found in their biographies, drawn from diverse sources, and by crosschecking and referencing all the available information on these extraordinary geotechnical engineers. Thus, the biographies of Terzaghi, Casagrande, Skempton, Peck and Bjerrum not only serve to provide knowledge on the individual, but moreover, as a collective, they present us with an exceptional insight into the important developments which took place in Soil Mechanics in the second third of the 20th century, and indeed, in some cases, up to the dawn of the 21st. The scientific contributions of these geotechnical engineers also find their place in the technical part of this thesis in which the initial individual contributions which make up several chapters retain their original approaches allowing us a view of the principles of Soil Mechanics from its very beginnings.
Abstract:
The filamentary model of the metal-insulator transition in randomly doped semiconductor impurity bands is geometrically equivalent to similar models for continuous transitions in dilute antiferromagnets and even to the λ transition in liquid He, but the critical behaviors are different. The origin of these differences lies in two factors: quantum statistics and the presence of long range Coulomb forces on both sides of the transition in the electrical case. In the latter case, in addition to the main transition, there are two satellite transitions associated with disappearance of the filamentary structure in both insulating and metallic phases. These two satellite transitions were first identified by Fritzsche in 1958, and their physical origin is explained here in geometrical and topological terms that facilitate calculation of critical exponents.
Abstract:
Proteins can be very tolerant to amino acid substitution, even within their core. Understanding the factors responsible for this behavior is of critical importance for protein engineering and design. Mutations in proteins have been quantified in terms of the changes in stability they induce. For example, guest residues in specific secondary structures have been used as probes of conformational preferences of amino acids, yielding propensity scales. Predicting these amino acid propensities would be a good test of any new potential energy functions used to mimic protein stability. We have recently developed a protein design procedure that optimizes whole sequences for a given target conformation based on the knowledge of the template backbone and on a semiempirical potential energy function. This energy function is purely physical, including steric interactions based on a Lennard-Jones potential, electrostatics based on a Coulomb potential, and hydrophobicity in the form of an environment free energy based on accessible surface area and interatomic contact areas. Sequences designed by this procedure for 10 different proteins were analyzed to extract conformational preferences for amino acids. The resulting structure-based propensity scales show significant agreements with experimental propensity scale values, both for α-helices and β-sheets. These results indicate that amino acid conformational preferences are a natural consequence of the potential energy we use. This confirms the accuracy of our potential and indicates that such preferences should not be added as a design criterion.
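The kind of purely physical pair terms described above (Lennard-Jones sterics plus Coulomb electrostatics) can be sketched as follows; the well depth, atomic radius, charges and dielectric constant are chosen for illustration and are not the paper's calibrated energy function.

```python
import math

COULOMB_K = 332.06  # kcal*angstrom/(mol*e^2), common biomolecular convention

def lennard_jones(r, epsilon=0.2, sigma=3.5):
    """Steric term: 4*eps*[(sigma/r)^12 - (sigma/r)^6], in kcal/mol."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def coulomb(r, q1, q2, dielectric=4.0):
    """Electrostatic term: k*q1*q2 / (eps_r * r), in kcal/mol."""
    return COULOMB_K * q1 * q2 / (dielectric * r)

# The LJ minimum sits at r = 2^(1/6)*sigma with depth -epsilon:
r_min = 2 ** (1 / 6) * 3.5
print(lennard_jones(r_min))      # approximately -epsilon
print(coulomb(3.0, 0.5, -0.5))   # opposite charges: negative (attractive) energy
```

The paper's full function additionally includes an environment (hydrophobicity) free energy based on accessible surface and contact areas, which has no simple pairwise closed form.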
Abstract:
I conjecture that the mechanism of superconductivity in the cuprates is a saving, due to the improved screening resulting from Cooper pair formation, of the part of the Coulomb energy associated with long wavelengths and midinfrared frequencies. This scenario is shown to provide a plausible explanation of the trend of transition temperature with layering structure in the Ca-spaced compounds and to predict a spectacularly large decrease in the electron-energy-loss spectroscopy cross-section in the midinfrared region on transition to the superconducting state, as well as less spectacular but still surprisingly large changes in the optical behavior. Existing experimental results appear to be consistent with this picture.
Abstract:
The design, realization, and test performance of an electronic junction based on single-electron phenomena that operates in air at room temperature are reported here. The element consists of an electrochemically etched sharp tungsten stylus on whose tip a nanometer-size crystal was synthesized. Langmuir-Blodgett films of cadmium arachidate were transferred onto the stylus and exposed to an H2S atmosphere to yield CdS nanocrystals (30–50 Å in diameter) embedded in an organic matrix. The stylus, biased with respect to a flat electrode, was brought to tunneling distance from the film, and a constant gap value was maintained by a piezoelectric actuator driven by a feedback circuit fed by the tunneling current. With this set-up, it is possible to measure the behavior of the current flowing through the quantum dot when a bias voltage is applied. Voltage-current characteristics measured in the system displayed single-electron trends such as a Coulomb blockade and a Coulomb staircase, and revealed capacitance values as small as 10⁻¹⁹ F.
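A quick back-of-the-envelope check (not from the paper) shows why a capacitance of order 10⁻¹⁹ F permits room-temperature operation: the single-electron charging energy E_C = e²/(2C) must far exceed the thermal energy kT for the Coulomb blockade to survive.

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C
K_B = 1.380649e-23          # Boltzmann constant, J/K

def charging_energy_ev(capacitance_farad):
    """Single-electron charging energy Ec = e^2 / (2C), returned in eV."""
    return E_CHARGE ** 2 / (2.0 * capacitance_farad) / E_CHARGE

ec = charging_energy_ev(1e-19)   # the abstract's smallest capacitance
kt = K_B * 300.0 / E_CHARGE      # thermal energy at 300 K, in eV
print(ec, kt, ec / kt)           # Ec ~ 0.8 eV, far above kT ~ 0.026 eV
```

This is why shrinking the capacitance to the 10⁻¹⁹ F scale, rather than cooling, is what enables single-electron behavior in air at 300 K.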
Abstract:
The dynamics of proton binding to the extracellular and the cytoplasmic surfaces of the purple membrane were measured by laser-induced proton pulses. Purple membranes, selectively labeled by fluorescein at Lys-129 of bacteriorhodopsin, were pulsed by protons released in the aqueous bulk from excited pyranine (8-hydroxy-1,3,6-pyrenetrisulfonate), and the reaction of protons with the indicators was measured. Kinetic analysis of the data implies that the two faces of the membrane differ in their buffer capacities and in their rates of interaction with bulk protons. The extracellular surface of the purple membrane contains one anionic proton binding site per protein molecule with pK = 5.1. This site is within a Coulomb cage radius (approximately 15 Å) of Lys-129. The cytoplasmic surface of the purple membrane bears 4–5 protonable moieties (pK = 5.1) that, due to close proximity, function as a common proton binding site. The reaction of the proton with this cluster proceeds at a very fast rate (3×10¹⁰ M⁻¹·s⁻¹). The proximity between the elements is sufficiently high that even in 100 mM NaCl they still function as a cluster. Extraction of the chromophore retinal from the protein has a marked effect on the carboxylates of the cytoplasmic surface, and two to three of them assume positions that almost bar their reaction with bulk protons. The protonation dynamics determined at the surface of the purple membrane is of relevance both for the vectorial proton transport mechanism of bacteriorhodopsin and for energy coupling, not only in halobacteria but also in complex chemiosmotic systems such as mitochondrial and thylakoid membranes.
Abstract:
We study the electronic structure of gated graphene sheets. We consider both infinite graphene and finite width ribbons. The effect of Coulomb interactions between the electrically injected carriers and the coupling to the external gate are computed self-consistently in the Hartree approximation. We compute the average density of extra carriers n2D, the number of occupied subbands, and the density profiles as a function of the gate potential Vg. We discuss quantum corrections to the classical capacitance and we calculate the threshold Vg above which semiconducting armchair ribbons conduct. We find that the ideal conductance of perfectly transmitting wide ribbons is proportional to the square root of the gate voltage.
Abstract:
We study a single-electron transistor (SET) based upon a II–VI semiconductor quantum dot doped with a single-Mn ion. We present evidence that this system behaves like a quantum nanomagnet whose total spin and magnetic anisotropy depend dramatically both on the number of carriers and their orbital nature. Thereby, the magnetic properties of the nanomagnet can be controlled electrically. Conversely, the electrical properties of this SET depend on the quantum state of the Mn spin, giving rise to spin-dependent charging energies and hysteresis in the Coulomb blockade oscillations of the linear conductance.
Abstract:
Spin–orbit coupling changes graphene, in principle, into a two-dimensional topological insulator, also known as quantum spin Hall insulator. One of the expected consequences is the existence of spin-filtered edge states that carry dissipationless spin currents and undergo no backscattering in the presence of non-magnetic disorder, leading to quantization of conductance. Whereas, due to the small size of spin–orbit coupling in graphene, the experimental observation of these remarkable predictions is unlikely, the theoretical understanding of these spin-filtered states is shedding light on the electronic properties of edge states in other two-dimensional quantum spin Hall insulators. Here we review the effect of a variety of perturbations, like curvature, disorder, edge reconstruction, edge crystallographic orientation, and Coulomb interactions on the electronic properties of these spin filtered states.
Abstract:
The appearance of ferromagnetic correlations among π electrons of phenanthrene (C14H10) molecules in the herringbone structure is proven for K doped clusters both by ab initio quantum-chemistry calculations and by the direct solution of the many-body Pariser-Parr-Pople Hamiltonian. Magnetic ground states are predicted for one or three additional electrons per phenanthrene molecule. These results are a consequence of the small overlap between the lowest unoccupied molecular orbitals (and lowest unoccupied molecular orbitals + 1) of neutral neighboring phenanthrene molecules, which makes the gain in energy by delocalization similar to the corresponding increase due to the Coulomb interaction.
Abstract:
The first few low-lying spin states of alternant polycyclic aromatic hydrocarbon (PAH) molecules of several shapes, showing defect states induced by contour hydrogenation, have been studied both by ab initio methods and by a precise numerical solution of the Pariser-Parr-Pople (PPP) interacting model. In accordance with Lieb's theorem, the ground state shows a spin multiplicity equal to one for balanced molecules, and takes larger values for imbalanced molecules (that is, when the numbers of π electrons on the two subsets are not equal). Furthermore, we find a systematic decrease of the singlet-triplet splitting as a function of the distance between defects, regardless of whether the ground state is singlet or triplet. For example, a splitting smaller than 0.001 eV is obtained for a medium-size C46H28 PAH molecule (di-hydrogenated [11]phenacene) showing a singlet ground state. We conclude that π electrons unbound by lattice defects tend to remain localized and unpaired even when long-range Coulomb interaction is taken into account. Therefore they show a biradical character (polyradical character for more than two defects) and should be studied as two or more local doublets. The implications for electron transport are potentially important, since these unpaired electrons can trap traveling electrons or simply flip their spin at a very small energy cost.
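Lieb's theorem, as invoked above, can be stated operationally: for the half-filled repulsive Hubbard model on a bipartite lattice, the ground-state spin is S = |N_A - N_B| / 2, where N_A and N_B are the numbers of sites on the two sublattices. A minimal sketch (the sublattice counts below are illustrative, not those of the molecules studied):

```python
# Lieb's theorem: ground-state spin S = |N_A - N_B| / 2 at half filling
# on a bipartite lattice, hence spin multiplicity 2S + 1 = |N_A - N_B| + 1.

def ground_state_multiplicity(n_a, n_b):
    """Spin multiplicity 2S+1 predicted by Lieb's theorem."""
    return abs(n_a - n_b) + 1

print(ground_state_multiplicity(23, 23))  # balanced sublattices: singlet
print(ground_state_multiplicity(24, 22))  # imbalance of 2: triplet
```

Contour hydrogenation changes the sublattice counts of the π system, which is how the defects steer the molecule between singlet and higher-multiplicity ground states.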
Abstract:
Model Hamiltonians have been, and still are, a valuable tool for investigating the electronic structure of systems for which mean field theories work poorly. This review will concentrate on the application of Pariser–Parr–Pople (PPP) and Hubbard Hamiltonians to investigate some relevant properties of polycyclic aromatic hydrocarbons (PAH) and graphene. When presenting these two Hamiltonians we will resort to second quantisation which, although not the formalism used in the original proposal of the PPP Hamiltonian, is much clearer. We will not attempt to be comprehensive; rather, our objective is to provide the reader with information on what kinds of problems they will encounter and what tools they will need to solve them. One of the key issues concerning model Hamiltonians that will be treated in detail is the choice of model parameters. Although model Hamiltonians reduce the complexity of the original Hamiltonian, in most cases they cannot be solved exactly. So, we shall first consider the Hartree–Fock approximation, still the only tool for handling large systems besides density functional theory (DFT) approaches. We proceed by discussing to what extent one may exactly solve model Hamiltonians, and the Lanczos approach. We shall describe the configuration interaction (CI) method, a common technology in quantum chemistry but one rarely used to solve model Hamiltonians. In particular, we propose a variant of the Lanczos method, inspired by CI, that has the novelty of using as the seed of the Lanczos process a mean field (Hartree–Fock) determinant (the method will be named LCI). Two questions of interest related to model Hamiltonians will be discussed: (i) when including long-range interactions, how crucial is it to include in the Hamiltonian the electronic charge that compensates the ion charges? (ii) Is it possible to reduce a Hamiltonian incorporating Coulomb interactions (PPP) to an 'effective' Hamiltonian including only on-site interactions (Hubbard)?
The performance of CI will be checked on small molecules. The electronic structure of azulene and fused azulene will be used to illustrate several aspects of the method. As regards graphene, several questions will be considered: (i) paramagnetic versus antiferromagnetic solutions, (ii) forbidden gap versus dot size, (iii) graphene nano-ribbons, and (iv) optical properties.
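As a minimal concrete example of the exact solutions discussed in the review (not taken from it), the two-site Hubbard model at half filling can be diagonalized in closed form: in the singlet sector the ground-state energy is E0 = (U - sqrt(U^2 + 16 t^2)) / 2, with hopping t and on-site repulsion U.

```python
import math

def hubbard_dimer_e0(t, u):
    """Exact singlet ground-state energy of the two-site, two-electron
    Hubbard model: E0 = (U - sqrt(U^2 + 16 t^2)) / 2."""
    return 0.5 * (u - math.sqrt(u * u + 16.0 * t * t))

# U = 0: both electrons occupy the bonding level, E0 = -2t.
print(hubbard_dimer_e0(1.0, 0.0))
# Large U: superexchange limit, E0 approaches -4 t^2 / U.
print(hubbard_dimer_e0(1.0, 100.0))
```

Larger clusters lose this closed form, which is exactly where the Lanczos and LCI machinery described in the review comes in.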
Abstract:
A method to calculate the effective spin Hamiltonian for a transition metal impurity in a non-magnetic insulating host is presented and applied to the paradigmatic case of Fe in MgO. In the first step we calculate the electronic structure employing standard density functional theory (DFT), based on generalized gradient approximation (GGA), using plane waves as a basis set. The corresponding basis of atomic-like maximally localized Wannier functions is derived and used to represent the DFT Hamiltonian, resulting in a tight-binding model for the atomic orbitals of the magnetic impurity. The third step is to solve, by exact numerical diagonalization, the N electron problem in the open shell of the magnetic atom, including both effects of spin–orbit and Coulomb repulsion. Finally, the low energy sector of this multi-electron Hamiltonian is mapped into effective spin models that, in addition to the spin matrices S, can also include the orbital angular momentum L when appropriate. We successfully apply the method to Fe in MgO, considering both the undistorted and Jahn–Teller (JT) distorted cases. Implications for the influence of Fe impurities on the performance of magnetic tunnel junctions based on MgO are discussed.
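The last step of the workflow above, mapping the low-energy sector onto an effective spin model, can be illustrated with a generic zero-field-splitting Hamiltonian for S = 1, H = D·Sz² + E·(Sx² - Sy²); the D and E values below are hypothetical, and the actual Fe-in-MgO case involves a higher spin and orbital angular momentum terms.

```python
# For S = 1 in the basis {|+1>, |0>, |-1>}: Sz^2 = diag(1, 0, 1) and
# Sx^2 - Sy^2 couples |+1> and |-1>, so H is block-diagonal:
# |0> stays at energy 0, while {|+1>, |-1>} split into D - E and D + E.

def spin1_zfs_levels(d, e):
    """Eigenvalues of H = D*Sz^2 + E*(Sx^2 - Sy^2) for S = 1, sorted."""
    return sorted([0.0, d - e, d + e])

print(spin1_zfs_levels(1.0, 0.2))  # levels 0, D - E, D + E
```

Fitting a handful of such parameters to the exact multi-electron spectrum is what "mapping into an effective spin model" amounts to in practice.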