969 results for "Many-electron Problem"
Abstract:
γ-ray astronomy studies the most energetic particles reaching the Earth from outer space. These γ rays are not generated by thermal processes in ordinary stars, but by particle-acceleration mechanisms in celestial objects such as active galactic nuclei, pulsars and supernovae, or possibly by dark-matter annihilation processes. The γ rays coming from these objects, and their characteristics, provide valuable information with which scientists try to understand the physical processes taking place in them and to develop theoretical models that describe them faithfully. The problem with observing γ rays is that they are absorbed in the upper layers of the atmosphere and never reach the surface (otherwise, the Earth would be uninhabitable). Consequently, there are only two ways to observe γ rays: flying detectors on board satellites, or observing the secondary effects that γ rays produce in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air and generates a highly energetic electron-positron pair. These secondary particles in turn generate further, progressively less energetic, secondary particles. While these particles still have enough energy to travel faster than the speed of light in air, they emit a bluish glow known as Cherenkov radiation for a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) can detect the Cherenkov radiation and even image the shape of the Cherenkov shower. From these images it is possible to determine the main characteristics of the original γ ray, and with enough γ rays important characteristics of the emitting object, hundreds of light-years away, can be deduced.
However, detecting Cherenkov showers produced by γ rays is far from easy. Showers generated by low-energy γ photons emit few photons, and only for a few nanoseconds, while those corresponding to high-energy γ rays, although they produce more electrons and last longer, become increasingly improbable as their energy grows. This leads to two development lines for Cherenkov telescopes: observing low-energy showers requires large reflectors that collect as many as possible of the few photons these showers emit, whereas high-energy showers can be detected with small telescopes, which should however cover a large area on the ground to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was born with the goal of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. This project, in which more than 27 countries participate, intends to build an observatory in each hemisphere, each equipped with 4 large telescopes (LSTs), around 30 medium-sized ones (MSTs) and up to 70 small ones (SSTs). Such an array achieves two goals. First, by drastically increasing the collection area with respect to current IACTs, more γ rays will be detected in all energy ranges. Second, when the same Cherenkov shower is observed by several telescopes at once, it can be analyzed much more precisely thanks to stereoscopic techniques. This thesis gathers several technical developments contributed to the medium and large telescopes of CTA, specifically to their trigger system.
Since Cherenkov showers are so brief, the systems that digitize and read out the data of each pixel must run at very high frequencies (≈ 1 GHz), which makes continuous operation unfeasible, as the amount of stored data would be unmanageable. Instead, the analog signals are sampled and the samples are kept in a circular buffer a few µs deep. While the signals remain in the buffer, the trigger system performs a fast analysis of the received signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or can instead be ignored, allowing the buffer to be overwritten. The decision is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, unlike the NSB (night sky background) photons, which arrive randomly. To detect large showers it suffices to check that more than a certain number of pixels in a region have each detected more than a certain number of photons within a time window of a few nanoseconds. To detect small showers, however, it is better to also take into account how many photons each pixel has detected (a technique known as sum-trigger). The trigger system developed in this thesis aims to optimize the sensitivity at low energies, so it analogically adds the signals received by every pixel in a trigger region and compares the result with a threshold directly expressible in detected photons (photoelectrons). The system allows trigger regions of a selectable size of 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each), with a high degree of overlap between them. In this way, any light excess in a compact region of 14, 21 or 28 pixels is detected and generates a trigger pulse.
In the most basic version of the trigger system, this pulse is distributed throughout the whole camera through a delicate distribution system, so that all clusters are read out at the same time, regardless of their position in the camera. The trigger system thus saves a complete camera image whenever the number of photons set as threshold is exceeded in a trigger region. However, this way of operating has two main drawbacks. First, the shower almost always occupies only a small area of the camera, so many pixels containing no information are stored. With as many telescopes as CTA will have, the amount of useless information stored for this reason can be very considerable. Second, each trigger stores only a few nanoseconds around the trigger instant, while large showers can last considerably longer, so part of the information is lost to temporal truncation. To solve both problems, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is an event in the camera and, if so, only the trigger regions exceeding the low threshold are read out, for a longer time. This avoids storing information from empty pixels, and the static shower images become short "videos" representing the temporal development of the shower. This new scheme is called COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in detail in chapter 5. An important problem affecting sum-trigger schemes like the one presented in this thesis is that, for the signals coming from each pixel to be added properly, they must all take the same time to reach the adder.
The photomultipliers used in each pixel introduce different delays that must be compensated for the sums to be performed correctly. The effect of these delays has been studied, and a system to compensate them has been developed. Finally, the next level of the trigger systems for effectively distinguishing Cherenkov showers from the NSB consists of looking for simultaneous (or nearly simultaneous) triggers in neighbouring telescopes. For this function, together with other system-interfacing tasks, a module called the Trigger Interface Board (TIB) has been developed. One such module will be mounted in the camera of each LST or MST and connected by optical fibers to the neighbouring telescopes. When a telescope produces a local trigger, it is sent to all connected neighbours and vice versa, so every telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibers, and those of the Cherenkov photons themselves in the air depending on the pointing direction, have been compensated, coincidences are searched for; if the trigger condition is fulfilled, the camera in question is read out in synchronization with the local trigger. Although the whole trigger system is the result of a collaboration between several groups, mainly IFAE, CIEMAT, ICC-UB and UCM in Spain, with the help of French and Japanese groups, the core of this thesis is the Level 1 trigger and the Trigger Interface Board, the two systems for which the author was the lead engineer. For this reason, this thesis includes abundant technical information about these systems.
There are currently important future development lines concerning both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger), which will yield interesting improvements over the current designs in the coming years and will hopefully benefit the whole scientific community participating in CTA. ABSTRACT: γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in mere stars, but by means of particle-acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars and supernovae, or as a result of dark-matter annihilation processes. The γ rays coming from these objects, and their characteristics, provide scientists with valuable information as they try to understand the underlying physics of these objects and to develop theoretical models able to describe them accurately. The problem when observing γ rays is that they are absorbed in the highest layers of the atmosphere, so they do not reach the Earth's surface (otherwise the planet would be uninhabitable). Therefore, there are only two possible ways to observe γ rays: by using detectors on board satellites, or by observing their secondary effects in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air, generating a highly energetic electron-positron pair. These secondary particles generate in turn more particles, each time with less energy. While these particles are still energetic enough to travel faster than the speed of light in air, they produce a bluish radiation known as Cherenkov light for a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the Cherenkov showers.
From these images it is possible to determine the main parameters of the original γ ray, and with enough γ rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ rays is not a simple task. The showers generated by low-energy γ rays contain few photons and last only a few nanoseconds, while those corresponding to high-energy γ rays, although having more photons and lasting longer, are much less likely. This results in two clearly differentiated development lines for IACTs: in order to detect low-energy showers, big reflectors are required to collect as many as possible of the few photons that these showers contain. On the contrary, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was created with the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large-size telescopes (LSTs), around 30 medium-size telescopes (MSTs) and up to 70 small-size telescopes (SSTs). With such an array, two goals would be achieved. First, the drastic increase in collection area with respect to current IACTs will lead to the detection of more γ rays in all energy ranges. Second, when a Cherenkov shower is observed by several telescopes at the same time, it can be analyzed much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium- and large-size telescopes of CTA. As the Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈ 1 GHz).
This makes it unfeasible to read the data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, with the analog samples stored in a ring buffer able to hold up to a few µs of data. While the signals remain in the buffer, the trigger system performs a fast analysis and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or can instead be ignored, allowing the buffer to be overwritten. The decision whether to save the image is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, in contrast to the random arrival of the NSB photons. Checking whether more than a certain number of pixels in a trigger region have detected more than a certain number of photons within a certain time window is enough to detect large showers. However, to optimize the sensitivity to low-energy showers it is more convenient to also take into account how many photons have been detected in each pixel (the sum-trigger technique). The trigger system presented in this thesis aims to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received by each pixel in a trigger region and compares the sum with a threshold that can be expressed directly as a number of detected photons (photoelectrons). The system allows trigger regions of 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each) to be selected, with extensive overlapping between regions. In this way, every light excess inside a compact region of 14, 21 or 28 pixels is detected and a trigger pulse is generated. In the most basic version of the trigger system, this pulse is simply distributed throughout the camera by means of a complex distribution system, so that all the clusters are read out at the same time, independently of their position in the camera.
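The sum-trigger decision described above can be sketched in a few lines. This is a purely illustrative model, not the actual CTA electronics: the threshold value, the linear cluster layout and the signal numbers are all invented, and the analog sum is replaced by a digital one.

```python
# Hedged sketch of a sum-trigger decision (illustrative values, not CTA's).

THRESHOLD_PE = 100.0   # trigger threshold in photoelectrons (assumed value)
CLUSTER_SIZE = 7       # pixels per cluster, as described in the text

def region_sums(cluster_signals, clusters_per_region):
    """Sum signals over overlapping groups of adjacent clusters.

    clusters_per_region: 2, 3 or 4 -> regions of 14, 21 or 28 pixels.
    """
    n = len(cluster_signals)
    return [
        sum(cluster_signals[i:i + clusters_per_region])
        for i in range(n - clusters_per_region + 1)   # overlapping windows
    ]

def sum_trigger(pixel_signals, clusters_per_region=3, threshold=THRESHOLD_PE):
    """Return True if any trigger region exceeds the threshold."""
    clusters = [
        sum(pixel_signals[i:i + CLUSTER_SIZE])
        for i in range(0, len(pixel_signals), CLUSTER_SIZE)
    ]
    return any(s > threshold for s in region_sums(clusters, clusters_per_region))

# A faint but compact flash over ~21 pixels fires the trigger, while the
# same total light spread as random NSB noise would not.
signals = [0.2] * 70            # NSB-like baseline over 70 pixels
for p in range(21, 42):         # compact light excess (3 full clusters)
    signals[p] += 6.0
print(sum_trigger(signals))     # True
```

The point of the sketch is the locality: only a compact excess, summed over one region, crosses the photoelectron threshold.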
Thus, the readout saves a complete camera image whenever the number of photoelectrons set as threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored. With as many telescopes as CTA will have, the amount of useless stored information can be very high. Second, every trigger stores only a few nanoseconds of information around the trigger time, while large showers can last considerably longer, losing information to the temporal cut. To overcome both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is a relevant event in the camera and, if so, only the trigger regions exceeding the low threshold are read out, for a longer time. In this way, information from empty pixels is not stored, and the fixed shower images become short "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in depth in chapter 5. An important problem affecting sum-trigger schemes like the one presented in this thesis is that, in order to add the signals from each pixel properly, they must arrive at the same time. The photomultipliers used in each pixel introduce different delays that must be compensated to perform the additions properly. The effect of these delays has been analyzed, and a delay-compensation system has been developed. The next trigger level consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. These functions, together with others related to interfacing different systems, have been implemented in a system named the Trigger Interface Board (TIB).
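The two-threshold idea can be reduced to a tiny selection rule. A minimal sketch, assuming invented threshold values and a flat list of region sums (the real COLIBRI design operates on the camera's overlapping regions and analog signals):

```python
# Minimal sketch of a two-threshold trigger/readout selection (assumed values).

HIGH_PE = 100.0   # camera-level event threshold (assumed)
LOW_PE = 20.0     # per-region readout threshold (assumed)

def select_regions(region_sums, high=HIGH_PE, low=LOW_PE):
    """Return indices of trigger regions to read out, or [] if no event.

    The high threshold decides whether there is an event at all; if so,
    every region above the low threshold is read out (for a longer time),
    so empty pixels are never stored.
    """
    if not any(s > high for s in region_sums):
        return []                       # no event: buffer may be overwritten
    return [i for i, s in enumerate(region_sums) if s > low]

print(select_regions([3.0, 150.0, 45.0, 5.0]))   # [1, 2]
print(select_regions([3.0, 60.0, 45.0, 5.0]))    # []
```

In the first call one region triggers the event and a dimmer neighbouring region is also read; in the second, nothing crosses the high threshold, so nothing is stored at all.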
This system comprises one module that will be placed inside each LST and MST camera and connected to the neighbouring telescopes through optical fibers. When a telescope produces a local trigger, it is sent to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibers and in the air, which depend on the pointing direction, have been compensated, the TIB looks for coincidences and, if the trigger condition is fulfilled, the camera is read out a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, especially IFAE, CIEMAT, ICC-UB and UCM in Spain, with help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, as they are the two systems designed by its author. For this reason, a large amount of technical information about these systems has been included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements over the current designs in the following years and will be useful for the whole scientific community participating in CTA.
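The stereo coincidence logic sketched below mirrors the TIB's delay-compensated comparison in its simplest form. All numbers (window width, delays, timestamps) are invented for illustration; the real system works on hardware signals, not timestamp lists.

```python
# Hedged sketch of a stereo coincidence search in the spirit of the TIB:
# known per-neighbour delays (fiber + Cherenkov path) are subtracted before
# checking a coincidence window around the local trigger (invented numbers).

COINCIDENCE_WINDOW_NS = 10.0   # assumed window half-width in nanoseconds

def stereo_coincidence(local_time_ns, neighbour_times_ns, delays_ns,
                       window_ns=COINCIDENCE_WINDOW_NS, min_neighbours=1):
    """Decide whether enough neighbours triggered in coincidence."""
    corrected = [t - d for t, d in zip(neighbour_times_ns, delays_ns)]
    hits = sum(abs(t - local_time_ns) <= window_ns for t in corrected)
    return hits >= min_neighbours

# Neighbour 0 looks 42 ns late only because of a 40 ns fiber delay; after
# compensation it falls inside the window and the camera is read out.
print(stereo_coincidence(1000.0, [1042.0, 1300.0], [40.0, 0.0]))   # True
```

Without the delay subtraction, the first neighbour would miss the window and a genuine stereo event would be lost, which is exactly why the compensation step matters.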
Abstract:
The inbound logistics for feeding the workstations inside the factory represent a critical issue in the car manufacturing industry. Nowadays, this issue is even more critical than in the past, since more types of cars are being produced on the assembly lines. Consequently, as workstations have to install many types of components, they also need to hold inventories of several variants of each component in a compact space. Replenishment is critical: a lack of inventory could cause line stoppage or reworking, while an excess of inventory could increase the holding cost or even block the replenishment paths. Replenishment routes cannot be decided without taking into consideration the inventory needed by each station during the production time, which depends on the production sequence. The problem deals with medium-sized instances, which are solved using online solvers. The contribution of this paper is a MILP for the replenishment and inventory of the components in a car assembly line.
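The coupling between production sequence, station inventory and replenishment that the MILP must respect can be illustrated with a toy feasibility check. Everything here is invented (models, consumption rates, capacities); it only shows the two failure modes named in the abstract, stockout and storage overflow:

```python
# Toy check of the inventory dynamics a replenishment plan must satisfy
# (invented data; the paper's actual MILP formulation is not reproduced).

def feasible(plan, sequence, usage, capacity, start=0):
    """Check a replenishment plan against the production sequence.

    plan:     units delivered to the station in each period.
    sequence: car model produced in each period.
    usage:    units of the component each model consumes.
    capacity: storage limit at the workstation (compact space).
    """
    stock = start
    for delivered, model in zip(plan, sequence):
        stock += delivered
        if stock > capacity:      # excess inventory blocks replenishment paths
            return False
        stock -= usage[model]
        if stock < 0:             # stockout -> line stoppage or reworking
            return False
    return True

usage = {"sedan": 2, "suv": 4}
seq = ["sedan", "suv", "suv", "sedan"]
print(feasible([4, 4, 2, 2], seq, usage, capacity=6))   # True
print(feasible([0, 0, 12, 0], seq, usage, capacity=6))  # False
```

A MILP would search over all such plans (and routes) for a minimum-cost one; the sketch only verifies a given plan.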
Abstract:
The appearance of large geolocated communication datasets has recently increased our understanding of how social networks relate to their physical space. However, many recurrently reported properties, such as the spatial clustering of network communities, have not yet been systematically tested at different scales. In this work we analyze the social network structure of over 25 million phone users from three countries at three different scales: country, province and city. We consistently find that this last, urban scenario presents significant differences from common knowledge about social networks. First, the emergence of a giant component in the network seems to be controlled by whether or not the network spans the entire urban border, almost independently of the population or geographic extension of the city. Second, urban communities are much less geographically clustered than expected. These two findings shed new light on the widely studied searchability of self-organized networks. By exhaustive simulation of decentralized search strategies we conclude that urban networks are searchable not through geographical proximity, as their country-wide counterparts are, but through a homophily-driven community structure.
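The decentralized search strategies referred to above follow a simple greedy template: each node forwards a message to the neighbour that appears closest to the target under some notion of distance (geographic, or community-based). A minimal sketch on an invented toy graph, not the paper's simulation setup:

```python
# Greedy decentralized search: forward to the neighbour nearest the target.
# Graph and distance table are invented toy data.

def greedy_search(graph, dist, source, target, max_hops=20):
    """Return the hop count of a greedy route, or None if it gets stuck."""
    current, hops = source, 0
    while current != target and hops < max_hops:
        nxt = min(graph[current], key=lambda n: dist[n][target])
        if dist[nxt][target] >= dist[current][target]:
            return None   # local minimum: the network is not greedily searchable here
        current, hops = nxt, hops + 1
    return hops if current == target else None

nodes = range(4)
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}                 # a path graph
dist = {a: {b: abs(a - b) for b in nodes} for a in nodes}      # toy "distance"
print(greedy_search(graph, dist, 0, 3))   # 3
```

Swapping the distance table is what distinguishes the two regimes in the abstract: geographic distance works country-wide, while within cities a community-similarity "distance" is what makes routes succeed.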
Abstract:
The mathematical underpinning of the pulse width modulation (PWM) technique lies in the attempt to represent "accurately" harmonic waveforms using only square forms of a fixed height. The accuracy can be measured using many norms, but the quality of the approximation of the analog signal (a harmonic form) by a digital one (simple pulses of a fixed high voltage level) requires the elimination of high-order harmonics in the error term. The most important practical problem is the "accurate" reproduction of a sine wave using the same number of pulses as the number of high harmonics eliminated. We describe in this paper a complete solution of the PWM problem using Padé approximations, orthogonal polynomials, and solitons. The main result of the paper is the characterization of discrete pulses answering the general PWM problem in terms of the manifold of all rational solutions to Korteweg-de Vries equations.
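The "error term" the abstract refers to can be made concrete numerically: a plain ±1 square wave already reproduces the fundamental sine (with amplitude 4/π), and its error consists entirely of odd harmonics 3, 5, 7, … which PWM switching angles are then chosen to suppress. A small numerical check of the baseline case, with no connection to the paper's Padé/soliton machinery:

```python
# Numerically compute sine Fourier coefficients of a plain square wave:
# b_1 = 4/pi, b_even = 0, b_n = 4/(n*pi) for odd n (the harmonic error term).

import math

def sine_coeff(signal, n, samples):
    """Numerical n-th sine Fourier coefficient of a 2*pi-periodic signal."""
    total = 0.0
    for k in range(samples):
        t = 2 * math.pi * (k + 0.5) / samples   # midpoint sampling
        total += signal(t) * math.sin(n * t)
    return 2.0 * total / samples

def square(t):
    """+1 on (0, pi), -1 on (pi, 2*pi)."""
    return 1.0 if t < math.pi else -1.0

for n in (1, 2, 3, 5):
    print(n, round(sine_coeff(square, n, 100000), 4))
# 1 -> ~1.2732 (= 4/pi), 2 -> 0.0, 3 -> ~0.4244, 5 -> ~0.2546
```

Selective-harmonic-elimination PWM inserts extra switching angles per quarter period precisely to zero out the first few of these odd coefficients while keeping b_1 at a prescribed value.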
Abstract:
The two-dimensional electron gas formed at the semiconductor heterointerface is a theater for many intriguing plays of physics. The fractional quantum Hall effect (FQHE), which occurs in strong magnetic fields and low temperatures, is the most fascinating of them. The concept of composite fermions and bosons not only is beautiful by itself but also has proved highly successful in providing pictorial interpretation of the phenomena associated with the FQHE.
Abstract:
Xylem cavitation in winter and recovery from cavitation in the spring were visualized in two species of diffuse-porous trees, Betula platyphylla var. japonica Hara and Salix sachalinensis Fr. Schm., by cryo-scanning electron microscopy after freeze-fixation of living twigs. Water in the vessel lumina of the outer three annual rings of twigs of B. platyphylla var. japonica and of S. sachalinensis gradually disappeared during the period from January to March, an indication that cavitation occurs gradually in these species during the winter. In April, when no leaves had yet expanded, the lumina of most of the vessels of both species were filled with water. Many vessel lumina in twigs of both species were filled with water during the period from the subsequent growth season to the beginning of the next winter. These observations indicate that recovery in spring occurs before the onset of transpiration and that water transport through twigs occurs during the subsequent growing season. We found, moreover, that vessels repeat an annual cycle of winter cavitation and spring recovery from cavitation for several years until irreversible cavitation occurs.
Abstract:
We report on a procedure for tissue preparation that combines thoroughly controlled physical and chemical treatments: quick-freezing and freeze-drying followed by fixation with OsO4 vapors and embedding by direct resin infiltration. Specimens of frog cutaneous pectoris muscle thus prepared were analyzed for total calcium using an electron spectroscopic imaging/electron energy loss spectroscopy (ESI/EELS) approach. The preservation of the ultrastructure was excellent, with positive K/Na ratios revealed in the fibers by x-ray microanalysis. Clear, high-resolution EELS/ESI calcium signals were recorded from the lumen of terminal cisternae of the sarcoplasmic reticulum but not from longitudinal cisternae, as expected from previous studies carried out with different techniques. In many mitochondria, calcium was below detection, whereas in others it was appreciable although at a variable level. Within the motor nerve terminals, synaptic vesicles as well as some cisternae of the smooth endoplasmic reticulum yielded positive signals, at variance with mitochondria, which were most often below detection. Taken as a whole, the present study reveals the potential of our experimental approach to map with high spatial resolution the total calcium within individual intracellular organelles identified by their established ultrastructure, but only where the element is present at high levels.
Abstract:
In this paper I review the ways in which the glassy state is obtained both in nature and in materials science and highlight a "new twist"--the recent recognition of polymorphism within the glassy state. The formation of glass by continuous cooling (viscous slowdown) is then examined, the strong/fragile liquids classification is reviewed, and a new twist-the possibility that the slowdown is a result of an avoided critical point-is noted. The three canonical characteristics of relaxing liquids are correlated through the fragility. As a further new twist, the conversion of strong liquids to fragile liquids by pressure-induced coordination number increases is demonstrated. It is then shown that, for comparable systems, it is possible to have the same conversion accomplished via a first-order transition within the liquid state during quenching. This occurs in the systems in which "polyamorphism" (polymorphism in the glassy state) is observed, and the whole phenomenology is accounted for by Poole's bond-modified van der Waals model. The sudden loss of some liquid degrees of freedom through such weak first-order transitions is then related to the polyamorphic transition between native and denatured hydrated proteins, since the latter are also glass-forming systems--water-plasticized, hydrogen bond-cross-linked chain polymers (and single molecule glass formers). The circle is closed with a final new twist by noting that a short time scale phenomenon much studied by protein physicists-namely, the onset of a sharp change in d
Abstract:
We consider a model of the photosystem II (PS II) reaction center in which its spectral properties result from weak (approximately 100 cm-1) excitonic interactions between the majority of reaction center chlorins. Such a model is consistent with a structure similar to that of the reaction center of purple bacteria but with a reduced coupling of the chlorophyll special pair. We find that this model is consistent with many experimental studies of PS II. The similarity in magnitude of the exciton coupling and energetic disorder in PS II results in the exciton states being structurally highly heterogeneous. This model suggests that P680, the primary electron donor of PS II, should not be considered a dimer but a multimer of several weakly coupled pigments, including the pheophytin electron acceptor. We thus conclude that even if the reaction center of PS II is structurally similar to that of purple bacteria, its spectroscopy and primary photochemistry may be very different.
Abstract:
The evolutionary stability of cooperation is a problem of fundamental importance for the biological and social sciences. Different claims have been made about this issue: whereas Axelrod and Hamilton's [Axelrod, R. & Hamilton, W. (1981) Science 211, 1390-1398] widely recognized conclusion is that cooperative rules such as "tit for tat" are evolutionarily stable strategies in the iterated prisoner's dilemma (IPD), Boyd and Lorberbaum [Boyd, R. & Lorberbaum, J. (1987) Nature (London) 327, 58-59] have claimed that no pure strategy is evolutionarily stable in this game. Here we explain why these claims are not contradictory by showing in what sense strategies in the IPD can and cannot be stable and by creating a conceptual framework that yields the type of evolutionary stability attainable in the IPD and in repeated games in general. Having established the relevant concept of stability, we report theorems on some basic properties of strategies that are stable in this sense. We first show that the IPD has "too many" such strategies, so that being stable does not discriminate among behavioral rules. Stable strategies differ, however, on a property that is crucial for their evolutionary survival--the size of the invasion they can resist. This property can be interpreted as a strategy's evolutionary robustness. Conditionally cooperative strategies such as tit for tat are the most robust. Cooperative behavior supported by these strategies is the most robust evolutionary equilibrium: the easiest to attain, and the hardest to disrupt.
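The iterated prisoner's dilemma discussed above is easy to simulate. A minimal sketch using the standard textbook payoffs T=5, R=3, P=1, S=0 (assumed here; the abstract does not specify values), showing why tit for tat is "conditionally cooperative": it is exploited at most once by a defector but sustains full cooperation with itself:

```python
# Tiny iterated prisoner's dilemma: tit for tat vs. unconditional defection.
# Payoffs are the usual textbook values (assumed, not taken from the paper).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not opp_hist else opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return "D"

def play(s1, s2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))     # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))   # (9, 14): exploited only in round 1
```

The robustness notion in the abstract asks a further question this sketch does not settle: how large an invading group of defectors (or other mutants) a population of such a strategy can resist.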
Abstract:
In maritime transportation, decisions are made in a dynamic setting where many aspects of the future are uncertain. However, most academic literature on maritime transportation considers static and deterministic routing and scheduling problems. This work addresses a gap in the literature on dynamic and stochastic maritime routing and scheduling problems, by focusing on the scheduling of departure times. Five simple strategies for setting departure times are considered, as well as a more advanced strategy which involves solving a mixed integer mathematical programming problem. The latter strategy is significantly better than the other methods, while adding only a small computational effort.
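The trade-off behind scheduling departure times under uncertainty can be illustrated with a toy Monte Carlo comparison. Everything here is invented (the paper's five strategies, cost structure and data are not given in the abstract): a ship faces a uniformly distributed sailing time, early arrival incurs a holding cost and late arrival a much larger delay penalty.

```python
# Invented Monte Carlo illustration of departure-time scheduling under
# stochastic sailing times (not the strategies or data of the paper).

import random

random.seed(1)
SAIL_MEAN, SAIL_SPREAD = 48.0, 12.0   # sailing time ~ U(36, 60) hours (invented)
DUE = 60.0                            # delivery deadline at the next port
WAIT_COST, LATE_COST = 1.0, 10.0      # per-hour costs (invented, asymmetric)

def cost(departure):
    sail = random.uniform(SAIL_MEAN - SAIL_SPREAD, SAIL_MEAN + SAIL_SPREAD)
    arrival = departure + sail
    early = max(0.0, DUE - arrival)   # waiting/holding at the destination
    late = max(0.0, arrival - DUE)    # delay penalty
    return WAIT_COST * early + LATE_COST * late

def average_cost(departure, trials=10000):
    return sum(cost(departure) for _ in range(trials)) / trials

# With a heavy delay penalty, later departures look much worse in this toy
# setup; a smarter strategy would optimize the departure time explicitly.
for dep in (0.0, 6.0, 12.0):
    print(dep, round(average_cost(dep), 1))
```

The abstract's advanced strategy replaces this kind of rule-of-thumb comparison with a mixed integer program that chooses departure times jointly.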
Abstract:
We report absolute experimental integral cross sections (ICSs) for electron impact excitation of bands of electronic-states in furfural, for incident electron energies in the range 20-250 eV. Wherever possible, those results are compared to corresponding excitation cross sections in the structurally similar species furan, as previously reported by da Costa et al. [Phys. Rev. A 85, 062706 (2012)] and Regeta and Allan [Phys. Rev. A 91, 012707 (2015)]. Generally, very good agreement is found. In addition, ICSs calculated with our independent atom model (IAM) with screening corrected additivity rule (SCAR) formalism, extended to account for interference (I) terms that arise due to the multi-centre nature of the scattering problem, are also reported. The sum of those ICSs gives the IAM-SCAR+I total cross section for electron-furfural scattering. Where possible, those calculated IAM-SCAR+I ICS results are compared against corresponding results from the present measurements with an acceptable level of accord being obtained. Similarly, but only for the band I and band II excited electronic states, we also present results from our Schwinger multichannel method with pseudopotentials calculations. Those results are found to be in good qualitative accord with the present experimental ICSs. Finally, with a view to assembling a complete cross section data base for furfural, some binary-encounter-Bethe-level total ionization cross sections for this collision system are presented. (C) 2016 AIP Publishing LLC.
Abstract:
Detection of a single nuclear spin constitutes an outstanding problem in different fields of physics such as quantum computing or magnetic imaging. Here we show that the energy levels of a single nuclear spin can be measured by means of inelastic electron tunneling spectroscopy (IETS). We consider two different systems, a magnetic adatom probed with scanning tunneling microscopy and a single Bi dopant in a silicon nanotransistor. We find that the hyperfine coupling opens new transport channels which can be resolved at experimentally accessible temperatures. Our simulations evince that IETS yields information about the occupations of the nuclear spin states, paving the way towards transport-detected single nuclear spin resonance.
Abstract:
A suitable knowledge of the orientation and motion of the Earth in space is a common need in various fields. That knowledge has always been necessary to carry out astronomical observations, but with the advent of the space age it became essential for making observations of satellites, predicting and determining their orbits, and observing the Earth from space as well. Given the relevant role it plays in Space Geodesy, Earth rotation is considered one of the three pillars of Geodesy, the other two being geometry and gravity. Besides, research on Earth rotation has fostered advances in many fields, such as Mathematics, Astronomy and Geophysics, for centuries. One remarkable feature of the problem lies in the extreme accuracy requirements that must be fulfilled in the near future: roughly speaking, about a millimetre on the tangent plane to the planet's surface. That challenges all of the theories that have been devised and used to date. The paper gives a short review of some of the most relevant methods, which can be regarded as milestones in Earth rotation research, emphasizing the Hamiltonian approach developed by the authors. Some contemporary problems are presented, as well as the main lines of future research planned by the International Astronomical Union/International Association of Geodesy Joint Working Group on Theory of Earth Rotation, created in 2013.
Abstract:
In recent times the Douglas–Rachford algorithm has been observed empirically to solve a variety of nonconvex feasibility problems including those of a combinatorial nature. For many of these problems current theory is not sufficient to explain this observed success and is mainly concerned with questions of local convergence. In this paper we analyze global behavior of the method for finding a point in the intersection of a half-space and a potentially non-convex set which is assumed to satisfy a well-quasi-ordering property or a property weaker than compactness. In particular, the special case in which the second set is finite is covered by our framework and provides a prototypical setting for combinatorial optimization problems.
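The prototypical setting the abstract names, a half-space intersected with a finite (hence non-convex) set, can be run numerically with the standard Douglas-Rachford iteration x ← x + P_B(2 P_A(x) − x) − P_A(x). The 2-D sets below are invented for illustration; convergence here is an observation on this toy instance, not a proof of the paper's results.

```python
# Douglas-Rachford iteration for a point in (half-space) ∩ (finite set).
# Toy 2-D instance with invented data; pure Python, no libraries.

def proj_halfspace(x, a, b):
    """Project x onto {y : a.y <= b} in 2-D."""
    ax = a[0] * x[0] + a[1] * x[1]
    if ax <= b:
        return x
    s = (ax - b) / (a[0] ** 2 + a[1] ** 2)
    return (x[0] - s * a[0], x[1] - s * a[1])

def proj_finite(x, points):
    """Project x onto a finite set: its nearest point."""
    return min(points, key=lambda p: (p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2)

def douglas_rachford(x, a, b, points, iters=100):
    """x <- x + P_B(2 P_A(x) - x) - P_A(x); return the shadow point P_A(x)."""
    for _ in range(iters):
        pa = proj_halfspace(x, a, b)
        reflected = (2 * pa[0] - x[0], 2 * pa[1] - x[1])
        pb = proj_finite(reflected, points)
        x = (x[0] + pb[0] - pa[0], x[1] + pb[1] - pa[1])
    return proj_halfspace(x, a, b)

# Half-space x + y <= 1 and a three-point set: only (0, 0) lies in both.
points = [(2.0, 2.0), (0.0, 0.0), (3.0, -1.0)]
print(douglas_rachford((5.0, 5.0), (1.0, 1.0), 1.0, points))   # (0.0, 0.0)
```

On this instance the iteration walks down to the unique feasible point; the paper's contribution is explaining when such global behavior is guaranteed despite the non-convexity of the second set.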